Test Report: Hyperkit_macOS 19478

cdbac7a92b6ef0941d2ffc9877dc4d64cf2ec5e1:2024-08-19:35858

Failed tests (26/201)

TestOffline (195.52s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-509000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-509000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m10.124107091s)

-- stdout --
	* [offline-docker-509000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-509000" primary control-plane node in "offline-docker-509000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-509000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0819 11:20:20.335254    8531 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:20:20.335528    8531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:20.335534    8531 out.go:358] Setting ErrFile to fd 2...
	I0819 11:20:20.335538    8531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:20.335694    8531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 11:20:20.337433    8531 out.go:352] Setting JSON to false
	I0819 11:20:20.363045    8531 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6590,"bootTime":1724085030,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 11:20:20.363159    8531 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:20:20.421291    8531 out.go:177] * [offline-docker-509000] minikube v1.33.1 on Darwin 14.6.1
	I0819 11:20:20.464391    8531 notify.go:220] Checking for updates...
	I0819 11:20:20.490266    8531 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 11:20:20.573007    8531 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 11:20:20.593284    8531 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 11:20:20.615177    8531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:20:20.638792    8531 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:20:20.660327    8531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:20:20.681409    8531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:20:20.710286    8531 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 11:20:20.752619    8531 start.go:297] selected driver: hyperkit
	I0819 11:20:20.752650    8531 start.go:901] validating driver "hyperkit" against <nil>
	I0819 11:20:20.752671    8531 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:20:20.757287    8531 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:20:20.757442    8531 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 11:20:20.765956    8531 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 11:20:20.769661    8531 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:20:20.769681    8531 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 11:20:20.769713    8531 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:20:20.769924    8531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:20:20.769959    8531 cni.go:84] Creating CNI manager for ""
	I0819 11:20:20.769977    8531 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:20:20.769981    8531 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:20:20.770056    8531 start.go:340] cluster config:
	{Name:offline-docker-509000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:20:20.770141    8531 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:20:20.837426    8531 out.go:177] * Starting "offline-docker-509000" primary control-plane node in "offline-docker-509000" cluster
	I0819 11:20:20.858428    8531 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:20:20.858504    8531 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 11:20:20.858546    8531 cache.go:56] Caching tarball of preloaded images
	I0819 11:20:20.858806    8531 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 11:20:20.858828    8531 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:20:20.859366    8531 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/offline-docker-509000/config.json ...
	I0819 11:20:20.859407    8531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/offline-docker-509000/config.json: {Name:mk8282b11815074f54a57ee048653c9376a3db13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:20:20.880444    8531 start.go:360] acquireMachinesLock for offline-docker-509000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:20:20.880622    8531 start.go:364] duration metric: took 133.862µs to acquireMachinesLock for "offline-docker-509000"
	I0819 11:20:20.880667    8531 start.go:93] Provisioning new machine with config: &{Name:offline-docker-509000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:20:20.880762    8531 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 11:20:20.923465    8531 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:20:20.923642    8531 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:20:20.923697    8531 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:20:20.932404    8531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53724
	I0819 11:20:20.932759    8531 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:20:20.933194    8531 main.go:141] libmachine: Using API Version  1
	I0819 11:20:20.933205    8531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:20:20.933425    8531 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:20:20.933533    8531 main.go:141] libmachine: (offline-docker-509000) Calling .GetMachineName
	I0819 11:20:20.933660    8531 main.go:141] libmachine: (offline-docker-509000) Calling .DriverName
	I0819 11:20:20.933806    8531 start.go:159] libmachine.API.Create for "offline-docker-509000" (driver="hyperkit")
	I0819 11:20:20.933831    8531 client.go:168] LocalClient.Create starting
	I0819 11:20:20.933865    8531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 11:20:20.933917    8531 main.go:141] libmachine: Decoding PEM data...
	I0819 11:20:20.933934    8531 main.go:141] libmachine: Parsing certificate...
	I0819 11:20:20.934016    8531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 11:20:20.934054    8531 main.go:141] libmachine: Decoding PEM data...
	I0819 11:20:20.934068    8531 main.go:141] libmachine: Parsing certificate...
	I0819 11:20:20.934081    8531 main.go:141] libmachine: Running pre-create checks...
	I0819 11:20:20.934089    8531 main.go:141] libmachine: (offline-docker-509000) Calling .PreCreateCheck
	I0819 11:20:20.934165    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:20.934325    8531 main.go:141] libmachine: (offline-docker-509000) Calling .GetConfigRaw
	I0819 11:20:20.934797    8531 main.go:141] libmachine: Creating machine...
	I0819 11:20:20.934805    8531 main.go:141] libmachine: (offline-docker-509000) Calling .Create
	I0819 11:20:20.934874    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:20.934986    8531 main.go:141] libmachine: (offline-docker-509000) DBG | I0819 11:20:20.934865    8552 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:20:20.935044    8531 main.go:141] libmachine: (offline-docker-509000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:20:21.492127    8531 main.go:141] libmachine: (offline-docker-509000) DBG | I0819 11:20:21.492034    8552 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/id_rsa...
	I0819 11:20:21.605875    8531 main.go:141] libmachine: (offline-docker-509000) DBG | I0819 11:20:21.605791    8552 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/offline-docker-509000.rawdisk...
	I0819 11:20:21.605889    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Writing magic tar header
	I0819 11:20:21.605898    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Writing SSH key tar header
	I0819 11:20:21.606244    8531 main.go:141] libmachine: (offline-docker-509000) DBG | I0819 11:20:21.606209    8552 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000 ...
	I0819 11:20:22.064445    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:22.064468    8531 main.go:141] libmachine: (offline-docker-509000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/hyperkit.pid
	I0819 11:20:22.064479    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Using UUID 054789de-0e36-49f2-aaf1-7723ceaf56d3
	I0819 11:20:22.253424    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Generated MAC 22:5:cc:b9:32:9e
	I0819 11:20:22.253442    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-509000
	I0819 11:20:22.253472    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"054789de-0e36-49f2-aaf1-7723ceaf56d3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0819 11:20:22.253509    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"054789de-0e36-49f2-aaf1-7723ceaf56d3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0819 11:20:22.253567    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "054789de-0e36-49f2-aaf1-7723ceaf56d3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/offline-docker-509000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/bzimage,
/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-509000"}
	I0819 11:20:22.253611    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 054789de-0e36-49f2-aaf1-7723ceaf56d3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/offline-docker-509000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machi
nes/offline-docker-509000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-509000"
	I0819 11:20:22.253626    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 11:20:22.256800    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 DEBUG: hyperkit: Pid is 8577
	I0819 11:20:22.257963    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 0
	I0819 11:20:22.257978    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:22.258053    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:22.258920    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:22.258995    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:22.259005    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:22.259014    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:22.259020    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:22.259027    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:22.259035    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:22.259041    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:22.259049    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:22.259061    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:22.259070    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:22.259080    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:22.259086    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:22.259117    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:22.259130    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:22.259139    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:22.259146    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:22.259163    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:22.259191    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:22.264326    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 11:20:22.372301    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 11:20:22.372909    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:20:22.372931    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:20:22.372939    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:20:22.372948    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:20:22.748337    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 11:20:22.748357    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 11:20:22.863268    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:20:22.863297    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:20:22.863327    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:20:22.863345    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:20:22.864140    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 11:20:22.864153    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 11:20:24.260909    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 1
	I0819 11:20:24.260921    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:24.260973    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:24.261818    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:24.261869    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:24.261883    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:24.261897    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:24.261904    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:24.261922    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:24.261937    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:24.261947    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:24.261953    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:24.261959    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:24.261966    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:24.261973    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:24.261982    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:24.261990    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:24.261997    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:24.262005    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:24.262012    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:24.262022    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:24.262037    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:26.262356    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 2
	I0819 11:20:26.262373    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:26.262461    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:26.263254    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:26.263302    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:26.263311    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:26.263321    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:26.263327    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:26.263333    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:26.263341    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:26.263347    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:26.263354    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:26.263360    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:26.263367    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:26.263373    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:26.263381    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:26.263398    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:26.263409    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:26.263416    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:26.263424    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:26.263430    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:26.263437    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:28.256051    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 11:20:28.256196    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 11:20:28.256208    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 11:20:28.264966    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 3
	I0819 11:20:28.264979    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:28.265046    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:28.265834    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:28.265865    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:28.265880    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:28.265894    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:28.265905    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:28.265914    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:28.265925    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:28.265935    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:28.265950    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:28.265962    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:28.265970    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:28.265982    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:28.265996    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:28.266004    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:28.266013    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:28.266026    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:28.266041    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:28.266055    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:28.266065    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:28.276079    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:20:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 11:20:30.266855    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 4
	I0819 11:20:30.266876    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:30.266987    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:30.267800    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:30.267892    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:30.267903    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:30.267914    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:30.267920    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:30.267929    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:30.267936    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:30.267945    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:30.267951    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:30.267958    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:30.267963    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:30.267970    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:30.267976    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:30.267986    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:30.267995    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:30.268017    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:30.268031    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:30.268041    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:30.268050    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:32.269767    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 5
	I0819 11:20:32.269783    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:32.269848    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:32.270629    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:32.270691    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:32.270706    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:32.270715    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:32.270721    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:32.270740    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:32.270757    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:32.270777    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:32.270795    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:32.270804    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:32.270812    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:32.270819    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:32.270827    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:32.270836    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:32.270850    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:32.270864    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:32.270878    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:32.270887    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:32.270894    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:34.271262    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 6
	I0819 11:20:34.271278    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:34.271321    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:34.272107    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:34.272156    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:34.272169    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:34.272180    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:34.272187    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:34.272194    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:34.272208    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:34.272224    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:34.272233    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:34.272246    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:34.272255    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:34.272263    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:34.272271    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:34.272277    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:34.272284    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:34.272320    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:34.272334    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:34.272341    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:34.272349    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:36.274357    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 7
	I0819 11:20:36.274373    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:36.274436    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:36.275228    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:36.275283    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:36.275296    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:36.275308    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:36.275327    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:36.275342    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:36.275352    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:36.275367    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:36.275375    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:36.275383    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:36.275392    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:36.275412    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:36.275428    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:36.275440    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:36.275452    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:36.275461    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:36.275469    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:36.275487    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:36.275501    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:38.276592    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 8
	I0819 11:20:38.276608    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:38.276645    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:38.277451    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:38.277512    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:38.277523    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:38.277530    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:38.277537    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:38.277544    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:38.277549    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:38.277558    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:38.277567    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:38.277574    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:38.277582    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:38.277589    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:38.277596    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:38.277613    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:38.277622    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:38.277636    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:38.277648    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:38.277667    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:38.277680    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:40.278585    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 9
	I0819 11:20:40.278602    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:40.278666    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:40.279435    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:40.279494    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:40.279507    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:40.279516    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:40.279526    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:40.279536    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:40.279551    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:40.279558    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:40.279566    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:40.279573    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:40.279581    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:40.279588    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:40.279596    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:40.279603    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:40.279611    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:40.279619    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:40.279627    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:40.279634    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:40.279640    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:42.281446    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 10
	I0819 11:20:42.281460    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:42.281479    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:42.282295    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:42.282308    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:42.282331    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:42.282339    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:42.282346    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:42.282352    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:42.282374    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:42.282385    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:42.282392    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:42.282402    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:42.282414    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:42.282421    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:42.282429    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:42.282436    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:42.282444    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:42.282451    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:42.282458    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:42.282466    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:42.282473    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:44.283601    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 11
	I0819 11:20:44.283621    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:44.283690    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:44.284476    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:44.284533    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:44.284543    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:44.284551    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:44.284558    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:44.284577    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:44.284593    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:44.284606    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:44.284615    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:44.284623    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:44.284632    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:44.284650    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:44.284659    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:44.284667    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:44.284675    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:44.284682    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:44.284690    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:44.284696    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:44.284702    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:46.286728    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 12
	I0819 11:20:46.286744    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:46.286798    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:46.287683    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:46.287734    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:46.287756    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:46.287769    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:46.287776    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:46.287789    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:46.287817    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:46.287836    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:46.287851    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:46.287860    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:46.287867    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:46.287874    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:46.287881    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:46.287886    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:46.287897    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:46.287909    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:46.287916    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:46.287923    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:46.287932    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:48.289948    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 13
	I0819 11:20:48.289968    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:48.290003    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:48.290910    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:48.290932    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:48.290952    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:48.290961    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:48.290981    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:48.290994    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:48.291002    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:48.291010    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:48.291017    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:48.291026    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:48.291033    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:48.291041    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:48.291049    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:48.291066    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:48.291089    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:48.291108    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:48.291117    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:48.291123    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:48.291131    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:50.291650    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 14
	I0819 11:20:50.291667    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:50.291723    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:50.292490    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:50.292516    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:50.292539    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:50.292555    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:50.292565    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:50.292572    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:50.292579    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:50.292585    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:50.292592    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:50.292598    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:50.292612    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:50.292628    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:50.292635    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:50.292644    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:50.292656    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:50.292663    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:50.292671    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:50.292678    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:50.292685    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:52.294742    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 15
	I0819 11:20:52.294755    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:52.294805    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:52.295729    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:52.295750    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:52.295763    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:52.295771    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:52.295777    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:52.295787    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:52.295813    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:52.295824    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:52.295832    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:52.295840    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:52.295847    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:52.295855    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:52.295863    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:52.295880    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:52.295893    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:52.295902    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:52.295913    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:52.295921    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:52.295929    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:54.296024    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 16
	I0819 11:20:54.296038    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:54.296093    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:54.297005    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:54.297013    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:54.297022    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:54.297029    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:54.297055    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:54.297067    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:54.297075    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:54.297083    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:54.297091    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:54.297099    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:54.297125    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:54.297139    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:54.297149    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:54.297158    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:54.297169    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:54.297178    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:54.297184    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:54.297192    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:54.297201    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:56.299172    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 17
	I0819 11:20:56.299187    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:56.299247    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:56.300111    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:56.300167    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:56.300179    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:56.300190    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:56.300197    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:56.300216    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:56.300231    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:56.300239    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:56.300248    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:56.300255    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:56.300263    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:56.300272    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:56.300280    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:56.300288    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:56.300296    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:56.300305    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:56.300313    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:56.300322    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:56.300331    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:20:58.302371    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 18
	I0819 11:20:58.302394    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:20:58.302433    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:20:58.303240    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:20:58.303290    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:20:58.303300    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:20:58.303310    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:20:58.303316    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:20:58.303323    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:20:58.303329    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:20:58.303349    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:20:58.303356    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:20:58.303367    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:20:58.303375    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:20:58.303381    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:20:58.303390    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:20:58.303401    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:20:58.303410    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:20:58.303417    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:20:58.303425    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:20:58.303434    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:20:58.303442    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:00.305446    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 19
	I0819 11:21:00.305463    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:00.305530    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:00.306381    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:00.306415    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:00.306423    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:00.306431    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:00.306449    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:00.306463    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:00.306479    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:00.306504    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:00.306517    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:00.306534    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:00.306544    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:00.306554    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:00.306565    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:00.306573    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:00.306579    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:00.306588    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:00.306596    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:00.306614    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:00.306627    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:02.308636    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 20
	I0819 11:21:02.308650    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:02.308718    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:02.309543    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:02.309596    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:02.309608    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:02.309617    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:02.309628    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:02.309638    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:02.309646    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:02.309669    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:02.309690    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:02.309700    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:02.309706    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:02.309715    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:02.309728    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:02.309736    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:02.309744    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:02.309753    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:02.309760    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:02.309765    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:02.309774    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:04.310386    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 21
	I0819 11:21:04.310398    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:04.310460    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:04.311268    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:04.311317    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:04.311328    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:04.311341    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:04.311351    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:04.311368    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:04.311381    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:04.311396    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:04.311408    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:04.311417    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:04.311425    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:04.311432    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:04.311439    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:04.311451    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:04.311461    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:04.311469    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:04.311476    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:04.311491    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:04.311503    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:06.312696    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 22
	I0819 11:21:06.312708    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:06.312785    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:06.313624    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:06.313702    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:06.313713    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:06.313725    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:06.313733    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:06.313739    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:06.313746    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:06.313764    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:06.313785    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:06.313799    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:06.313807    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:06.313815    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:06.313822    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:06.313830    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:06.313840    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:06.313848    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:06.313856    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:06.313863    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:06.313879    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:08.315894    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 23
	I0819 11:21:08.315907    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:08.315967    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:08.316796    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:08.316845    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:08.316855    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:08.316864    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:08.316870    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:08.316883    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:08.316894    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:08.316915    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:08.316925    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:08.316945    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:08.316953    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:08.316962    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:08.316977    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:08.316990    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:08.317000    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:08.317015    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:08.317030    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:08.317043    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:08.317053    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:10.319052    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 24
	I0819 11:21:10.319069    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:10.319102    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:10.320028    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:10.320061    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:10.320071    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:10.320078    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:10.320085    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:10.320096    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:10.320103    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:10.320110    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:10.320119    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:10.320130    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:10.320137    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:10.320146    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:10.320153    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:10.320159    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:10.320172    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:10.320187    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:10.320202    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:10.320217    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:10.320235    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:12.322288    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 25
	I0819 11:21:12.322306    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:12.322356    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:12.323293    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:12.323345    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:12.323365    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:12.323379    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:12.323400    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:12.323422    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:12.323434    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:12.323441    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:12.323448    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:12.323454    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:12.323462    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:12.323470    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:12.323493    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:12.323510    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:12.323522    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:12.323531    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:12.323536    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:12.323565    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:12.323578    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:14.324534    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 26
	I0819 11:21:14.324549    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:14.324597    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:14.325583    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:14.325622    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:14.325638    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:14.325651    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:14.325659    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:14.325667    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:14.325673    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:14.325687    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:14.325698    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:14.325704    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:14.325710    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:14.325717    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:14.325723    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:14.325729    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:14.325737    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:14.325743    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:14.325749    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:14.325768    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:14.325780    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:16.326825    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 27
	I0819 11:21:16.326837    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:16.326921    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:16.327803    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:16.327844    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:16.327857    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:16.327874    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:16.327885    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:16.327899    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:16.327912    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:16.327920    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:16.327926    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:16.327932    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:16.327939    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:16.327946    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:16.327960    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:16.327972    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:16.327982    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:16.327990    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:16.327997    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:16.328006    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:16.328020    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:18.330048    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 28
	I0819 11:21:18.330064    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:18.330127    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:18.330947    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:18.330988    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:18.331009    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:18.331045    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:18.331058    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:18.331065    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:18.331074    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:18.331097    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:18.331110    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:18.331118    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:18.331133    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:18.331141    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:18.331147    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:18.331160    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:18.331171    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:18.331179    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:18.331187    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:18.331194    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:18.331200    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:20.332437    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 29
	I0819 11:21:20.332451    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:20.332520    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:20.333361    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 22:5:cc:b9:32:9e in /var/db/dhcpd_leases ...
	I0819 11:21:20.333424    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:20.333434    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:20.333454    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:20.333461    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:20.333468    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:20.333474    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:20.333481    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:20.333489    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:20.333505    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:20.333517    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:20.333525    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:20.333532    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:20.333539    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:20.333546    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:20.333554    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:20.333565    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:20.333573    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:20.333582    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:22.334950    8531 client.go:171] duration metric: took 1m1.400872207s to LocalClient.Create
	I0819 11:21:24.337073    8531 start.go:128] duration metric: took 1m3.456054198s to createHost
	I0819 11:21:24.337121    8531 start.go:83] releasing machines lock for "offline-docker-509000", held for 1m3.456233449s
	W0819 11:21:24.337140    8531 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:5:cc:b9:32:9e
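The "Attempt N" blocks above show the driver polling /var/db/dhcpd_leases roughly every two seconds for an entry whose HWAddress matches the VM's MAC; after about thirty attempts with no match it gives up with the "IP address never found" error just logged. Below is a minimal, self-contained Go sketch of that lookup loop. The entry format is copied from the "dhcp entry" lines above, and every name in it is illustrative — this is not the actual docker-machine-driver-hyperkit code.

	// lease_lookup.go — illustrative sketch only, mirroring the
	// "Searching for <MAC> in /var/db/dhcpd_leases" behaviour seen in this log.
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"time"
	)

	// entryRe matches the entry shape logged above, e.g.
	//   {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:... Lease:0x66c4de04}
	// (the on-disk lease file itself uses a multi-line record format).
	var entryRe = regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+) `)

	// findIPByMAC scans the lease file once for an already-normalized MAC.
	func findIPByMAC(path, mac string) (string, bool) {
		data, err := os.ReadFile(path)
		if err != nil {
			return "", false
		}
		for _, m := range entryRe.FindAllStringSubmatch(string(data), -1) {
			if m[2] == mac {
				return m[1], true
			}
		}
		return "", false
	}

	func main() {
		// Poll every 2s (the spacing of the Attempt lines above) for up to
		// 60s, roughly matching the ~30 attempts before the error fires.
		deadline := time.Now().Add(60 * time.Second)
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, ok := findIPByMAC("/var/db/dhcpd_leases", "22:5:cc:b9:32:9e"); ok {
				fmt.Println("found IP:", ip)
				return
			}
			fmt.Printf("Attempt %d: MAC not in lease file yet\n", attempt)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("IP address never found in dhcp leases file")
	}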
	I0819 11:21:24.337453    8531 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:21:24.337490    8531 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:21:24.347497    8531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53761
	I0819 11:21:24.347838    8531 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:21:24.348218    8531 main.go:141] libmachine: Using API Version  1
	I0819 11:21:24.348235    8531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:21:24.348458    8531 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:21:24.348841    8531 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:21:24.348890    8531 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:21:24.357586    8531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53763
	I0819 11:21:24.358032    8531 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:21:24.358456    8531 main.go:141] libmachine: Using API Version  1
	I0819 11:21:24.358490    8531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:21:24.358843    8531 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:21:24.358975    8531 main.go:141] libmachine: (offline-docker-509000) Calling .GetState
	I0819 11:21:24.359086    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:24.359196    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:24.360239    8531 main.go:141] libmachine: (offline-docker-509000) Calling .DriverName
	I0819 11:21:24.421451    8531 out.go:177] * Deleting "offline-docker-509000" in hyperkit ...
	I0819 11:21:24.442401    8531 main.go:141] libmachine: (offline-docker-509000) Calling .Remove
	I0819 11:21:24.442584    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:24.442600    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:24.442667    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:24.443604    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:24.443672    8531 main.go:141] libmachine: (offline-docker-509000) DBG | waiting for graceful shutdown
	I0819 11:21:25.445832    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:25.445902    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:25.446814    8531 main.go:141] libmachine: (offline-docker-509000) DBG | waiting for graceful shutdown
	I0819 11:21:26.447684    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:26.447766    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:26.449391    8531 main.go:141] libmachine: (offline-docker-509000) DBG | waiting for graceful shutdown
	I0819 11:21:27.450160    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:27.450230    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:27.450942    8531 main.go:141] libmachine: (offline-docker-509000) DBG | waiting for graceful shutdown
	I0819 11:21:28.452462    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:28.452563    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:28.453180    8531 main.go:141] libmachine: (offline-docker-509000) DBG | waiting for graceful shutdown
	I0819 11:21:29.454216    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:29.454303    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8577
	I0819 11:21:29.455230    8531 main.go:141] libmachine: (offline-docker-509000) DBG | sending sigkill
	I0819 11:21:29.455241    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0819 11:21:29.468592    8531 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:5:cc:b9:32:9e
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:5:cc:b9:32:9e
	I0819 11:21:29.468609    8531 start.go:729] Will try again in 5 seconds ...
	I0819 11:21:29.480147    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:21:29 WARN : hyperkit: failed to read stdout: EOF
	I0819 11:21:29.480166    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:21:29 WARN : hyperkit: failed to read stderr: EOF
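The failure is treated as retryable: the half-created VM is torn down ("Deleting ... in hyperkit"), and a second create pass is scheduled five seconds later. A condensed sketch of that outer control flow follows — the helper names are hypothetical stand-ins for minikube's internals, not its actual API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry sketches the shape of the retry seen in this log:
	// one cleanup-and-retry after the first create fails. Names are hypothetical.
	func startWithRetry(create func() error, cleanup func()) error {
		if err := create(); err != nil {
			cleanup() // corresponds to: Deleting "offline-docker-509000" in hyperkit ...
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			return create()             // one more attempt before giving up
		}
		return nil
	}

	func main() {
		attempts := 0
		err := startWithRetry(
			func() error { attempts++; return errors.New("IP address never found in dhcp leases file") },
			func() { fmt.Println("cleaning up half-created VM") },
		)
		fmt.Println("attempts:", attempts, "final error:", err)
	}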
	I0819 11:21:34.470718    8531 start.go:360] acquireMachinesLock for offline-docker-509000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:22:27.370096    8531 start.go:364] duration metric: took 52.899136506s to acquireMachinesLock for "offline-docker-509000"
	I0819 11:22:27.370132    8531 start.go:93] Provisioning new machine with config: &{Name:offline-docker-509000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:22:27.370186    8531 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 11:22:27.391514    8531 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:22:27.391595    8531 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:22:27.391618    8531 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:22:27.400101    8531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53771
	I0819 11:22:27.400462    8531 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:22:27.400763    8531 main.go:141] libmachine: Using API Version  1
	I0819 11:22:27.400772    8531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:22:27.400986    8531 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:22:27.401099    8531 main.go:141] libmachine: (offline-docker-509000) Calling .GetMachineName
	I0819 11:22:27.401209    8531 main.go:141] libmachine: (offline-docker-509000) Calling .DriverName
	I0819 11:22:27.401330    8531 start.go:159] libmachine.API.Create for "offline-docker-509000" (driver="hyperkit")
	I0819 11:22:27.401343    8531 client.go:168] LocalClient.Create starting
	I0819 11:22:27.401371    8531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 11:22:27.401421    8531 main.go:141] libmachine: Decoding PEM data...
	I0819 11:22:27.401431    8531 main.go:141] libmachine: Parsing certificate...
	I0819 11:22:27.401470    8531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 11:22:27.401508    8531 main.go:141] libmachine: Decoding PEM data...
	I0819 11:22:27.401520    8531 main.go:141] libmachine: Parsing certificate...
	I0819 11:22:27.401539    8531 main.go:141] libmachine: Running pre-create checks...
	I0819 11:22:27.401544    8531 main.go:141] libmachine: (offline-docker-509000) Calling .PreCreateCheck
	I0819 11:22:27.401622    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:27.401699    8531 main.go:141] libmachine: (offline-docker-509000) Calling .GetConfigRaw
	I0819 11:22:27.433517    8531 main.go:141] libmachine: Creating machine...
	I0819 11:22:27.433526    8531 main.go:141] libmachine: (offline-docker-509000) Calling .Create
	I0819 11:22:27.433629    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:27.433745    8531 main.go:141] libmachine: (offline-docker-509000) DBG | I0819 11:22:27.433602    8739 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:22:27.433802    8531 main.go:141] libmachine: (offline-docker-509000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:22:27.665130    8531 main.go:141] libmachine: (offline-docker-509000) DBG | I0819 11:22:27.665029    8739 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/id_rsa...
	I0819 11:22:27.727501    8531 main.go:141] libmachine: (offline-docker-509000) DBG | I0819 11:22:27.727414    8739 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/offline-docker-509000.rawdisk...
	I0819 11:22:27.727528    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Writing magic tar header
	I0819 11:22:27.727550    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Writing SSH key tar header
	I0819 11:22:27.728052    8531 main.go:141] libmachine: (offline-docker-509000) DBG | I0819 11:22:27.728016    8739 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000 ...
	I0819 11:22:28.101963    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:28.101998    8531 main.go:141] libmachine: (offline-docker-509000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/hyperkit.pid
	I0819 11:22:28.102012    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Using UUID ffd9d285-5836-47ea-81c3-fce643d1763e
	I0819 11:22:28.127179    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Generated MAC 32:52:7c:fc:d5:40
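One detail worth noting when reading these lease scans: macOS's DHCP server records hardware addresses with leading zeros stripped per octet (compare the earlier search target 22:5:cc:b9:32:9e and entries like 6:6:7f:7b:24:3d above), so a freshly generated MAC must be normalized to that form before comparison. Below is a small sketch of one plausible normalization — an assumption about the matching rule drawn from the log, not the driver's verbatim code.

	package main

	import (
		"fmt"
		"strings"
	)

	// trimLeadingZeros rewrites "06:06:7f:7b:24:3d" as "6:6:7f:7b:24:3d",
	// the per-octet form the dhcp entry lines in this log use.
	func trimLeadingZeros(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			trimmed := strings.TrimLeft(p, "0")
			if trimmed == "" {
				trimmed = "0" // keep a single zero for an all-zero octet
			}
			parts[i] = trimmed
		}
		return strings.Join(parts, ":")
	}

	func main() {
		fmt.Println(trimLeadingZeros("22:05:cc:b9:32:9e")) // -> 22:5:cc:b9:32:9e
		fmt.Println(trimLeadingZeros("32:52:7c:fc:d5:40")) // unchanged: no leading zeros
	}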
	I0819 11:22:28.127201    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-509000
	I0819 11:22:28.127231    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ffd9d285-5836-47ea-81c3-fce643d1763e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:22:28.127260    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ffd9d285-5836-47ea-81c3-fce643d1763e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:22:28.127304    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ffd9d285-5836-47ea-81c3-fce643d1763e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/offline-docker-509000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-509000"}
	I0819 11:22:28.127352    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ffd9d285-5836-47ea-81c3-fce643d1763e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/offline-docker-509000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-509000"
	I0819 11:22:28.127369    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 11:22:28.130500    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 DEBUG: hyperkit: Pid is 8740
	I0819 11:22:28.130923    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 0
	I0819 11:22:28.130941    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:28.131020    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:28.131967    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:28.132030    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:28.132048    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:28.132061    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:28.132074    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:28.132086    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:28.132110    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:28.132140    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:28.132153    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:28.132168    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:28.132185    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:28.132200    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:28.132220    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:28.132236    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:28.132255    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:28.132268    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:28.132282    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:28.132296    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:28.132316    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
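
The "Searching for 32:52:7c:fc:d5:40" lines above show the hyperkit driver rescanning macOS's /var/db/dhcpd_leases for the MAC address it generated for the new VM. A minimal sketch of that matching step, assuming lease records shaped like the {Name ... HWAddress ...} entries printed above; DHCPEntry, normalizeMAC, and findIPByMAC are illustrative names, not minikube's actual API. The normalization step matters because the lease file records octets without zero padding (e.g. 6:6:7f:7b:24:3d above):

	// Sketch only, not the driver's real code: match a target MAC against
	// parsed dhcpd lease entries like the ones dumped in the log above.
	package main

	import (
		"fmt"
		"strings"
	)

	type DHCPEntry struct {
		Name      string
		IPAddress string
		HWAddress string
		Lease     string
	}

	// normalizeMAC lowercases each octet and strips leading zeros so that
	// "06:06:7F:7B:24:3D" and "6:6:7f:7b:24:3d" compare equal.
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			parts[i] = strings.TrimLeft(p, "0")
			if parts[i] == "" {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}

	// findIPByMAC returns the lease entry whose hardware address matches mac.
	func findIPByMAC(entries []DHCPEntry, mac string) (DHCPEntry, bool) {
		want := normalizeMAC(mac)
		for _, e := range entries {
			if normalizeMAC(e.HWAddress) == want {
				return e, true
			}
		}
		return DHCPEntry{}, false
	}

	func main() {
		leases := []DHCPEntry{
			{Name: "minikube", IPAddress: "192.169.0.15", HWAddress: "6:6:7f:7b:24:3d", Lease: "0x66c38a6e"},
			{Name: "minikube", IPAddress: "192.169.0.18", HWAddress: "b2:15:5f:e8:63:75", Lease: "0x66c4de04"},
		}
		if e, ok := findIPByMAC(leases, "32:52:7c:fc:d5:40"); ok {
			fmt.Println("found:", e.IPAddress)
		} else {
			fmt.Println("no lease for that MAC yet") // the state this log is stuck in
		}
	}

Because the VM never obtains a lease, 32:52:7c:fc:d5:40 never appears among the 17 entries, and the search below repeats without a match.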
	I0819 11:22:28.138087    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 11:22:28.146154    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/offline-docker-509000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 11:22:28.147169    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:22:28.147192    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:22:28.147205    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:22:28.147218    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:22:28.521467    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 11:22:28.521482    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 11:22:28.636142    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:22:28.636164    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:22:28.636177    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:22:28.636189    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:22:28.637009    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 11:22:28.637019    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 11:22:30.134222    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 1
	I0819 11:22:30.134238    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:30.134323    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:30.135115    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:30.135178    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
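
The attempt timestamps (11:22:28, 11:22:30, 11:22:32, ...) show the driver polling on a roughly two-second cadence. A sketch of such a poll-with-timeout loop under that assumption; lookupLease and waitForIP are hypothetical stand-ins, not the driver's real helpers:

	// Sketch of the ~2s retry cadence visible in the timestamps above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var errNotFound = errors.New("lease not found")

	// lookupLease would rescan /var/db/dhcpd_leases for mac; stubbed here.
	func lookupLease(mac string) (string, error) { return "", errNotFound }

	func waitForIP(mac string, attempts int, interval time.Duration) (string, error) {
		for i := 0; i < attempts; i++ {
			if ip, err := lookupLease(mac); err == nil {
				return ip, nil
			}
			time.Sleep(interval) // matches the ~2s gap between attempts in the log
		}
		return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
	}

	func main() {
		_, err := waitForIP("32:52:7c:fc:d5:40", 3, 2*time.Second)
		fmt.Println(err) // the situation this log captures: the MAC never shows up
	}

The remaining attempts below each find the same 17 entries enumerated under Attempt 0, with no match; the duplicated per-entry dumps are omitted.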
	I0819 11:22:32.135853    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 2
	I0819 11:22:32.135868    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:32.135962    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:32.136765    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:32.136822    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:34.017942    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 11:22:34.018163    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 11:22:34.018172    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 11:22:34.038670    8531 main.go:141] libmachine: (offline-docker-509000) DBG | 2024/08/19 11:22:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 11:22:34.137637    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 3
	I0819 11:22:34.137661    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:34.137868    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:34.139335    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:34.139454    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:36.140029    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 4
	I0819 11:22:36.140045    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:36.140154    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:36.140927    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:36.141004    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:38.143209    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 5
	I0819 11:22:38.143223    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:38.143273    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:38.144089    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:38.144123    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:40.146348    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 6
	I0819 11:22:40.146364    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:40.146412    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:40.147239    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:40.147293    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:42.147859    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 7
	I0819 11:22:42.147873    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:42.147943    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:42.148739    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:42.148787    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:44.150994    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 8
	I0819 11:22:44.151009    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:44.151114    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:44.151944    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:44.152001    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:46.154203    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 9
	I0819 11:22:46.154218    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:46.154274    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:46.155127    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:46.155139    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:48.155443    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 10
	I0819 11:22:48.155460    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:48.155520    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:48.156301    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:48.156355    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:50.158584    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 11
	I0819 11:22:50.158600    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:50.158622    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:50.159519    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:50.159569    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:52.160286    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 12
	I0819 11:22:52.160303    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:52.160377    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:52.161211    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:52.161256    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:54.163012    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 13
	I0819 11:22:54.163028    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:54.163087    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:54.164106    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:54.164162    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:56.165736    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 14
	I0819 11:22:56.165748    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:56.165883    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:56.166692    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:56.166754    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:56.166765    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:56.166774    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:56.166784    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:56.166792    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:56.166798    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:56.166805    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:56.166813    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:56.166828    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:56.166842    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:56.166851    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:56.166858    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:56.166873    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:56.166882    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:56.166896    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:56.166908    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:56.166919    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:56.166930    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:58.168879    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 15
	I0819 11:22:58.168891    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:58.168982    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:22:58.169777    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:22:58.169825    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:58.169841    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:58.169854    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:58.169860    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:58.169868    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:58.169876    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:58.169882    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:58.169891    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:58.169898    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:58.169903    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:58.169910    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:58.169917    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:58.169926    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:58.169946    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:58.169959    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:58.169980    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:58.169992    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:58.170001    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:00.171449    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 16
	I0819 11:23:00.171464    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:00.171519    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:00.172362    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:00.172424    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:00.172432    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:00.172442    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:00.172450    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:00.172478    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:00.172485    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:00.172506    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:00.172515    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:00.172534    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:00.172544    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:00.172553    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:00.172563    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:00.172575    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:00.172582    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:00.172596    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:00.172605    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:00.172613    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:00.172621    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:02.173593    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 17
	I0819 11:23:02.173608    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:02.173658    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:02.174464    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:02.174513    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:02.174523    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:02.174535    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:02.174544    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:02.174560    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:02.174567    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:02.174575    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:02.174587    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:02.174595    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:02.174601    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:02.174621    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:02.174632    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:02.174642    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:02.174651    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:02.174658    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:02.174667    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:02.174673    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:02.174681    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:04.176719    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 18
	I0819 11:23:04.176731    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:04.176786    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:04.177605    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:04.177661    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:04.177675    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:04.177686    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:04.177699    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:04.177707    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:04.177714    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:04.177731    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:04.177742    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:04.177751    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:04.177760    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:04.177767    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:04.177775    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:04.177782    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:04.177790    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:04.177799    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:04.177807    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:04.177814    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:04.177821    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:06.179826    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 19
	I0819 11:23:06.179840    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:06.179908    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:06.180728    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:06.180773    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:06.180782    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:06.180794    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:06.180802    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:06.180825    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:06.180841    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:06.180850    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:06.180855    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:06.180862    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:06.180870    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:06.180882    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:06.180905    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:06.180920    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:06.180929    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:06.180937    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:06.180951    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:06.180960    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:06.180970    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:08.182974    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 20
	I0819 11:23:08.182990    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:08.183035    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:08.183907    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:08.183958    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:08.183968    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:08.183981    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:08.183988    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:08.183996    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:08.184006    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:08.184022    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:08.184035    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:08.184047    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:08.184054    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:08.184060    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:08.184068    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:08.184076    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:08.184083    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:08.184091    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:08.184097    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:08.184104    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:08.184112    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:10.183625    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 21
	I0819 11:23:10.183642    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:10.183755    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:10.184552    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:10.184602    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:10.184614    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:10.184637    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:10.184644    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:10.184655    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:10.184663    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:10.184669    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:10.184676    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:10.184686    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:10.184708    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:10.184719    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:10.184727    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:10.184734    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:10.184743    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:10.184766    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:10.184775    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:10.184783    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:10.184791    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:12.182136    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 22
	I0819 11:23:12.182151    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:12.182211    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:12.182994    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:12.183045    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:12.183056    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:12.183073    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:12.183082    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:12.183097    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:12.183107    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:12.183115    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:12.183122    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:12.183129    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:12.183137    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:12.183146    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:12.183154    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:12.183161    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:12.183169    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:12.183176    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:12.183181    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:12.183195    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:12.183209    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:14.181980    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 23
	I0819 11:23:14.181993    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:14.182056    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:14.182852    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:14.182898    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:14.182911    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:14.182925    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:14.182935    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:14.182944    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:14.182950    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:14.182958    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:14.182966    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:14.182972    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:14.182979    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:14.182985    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:14.182994    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:14.183005    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:14.183011    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:14.183018    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:14.183031    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:14.183039    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:14.183044    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:16.181682    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 24
	I0819 11:23:16.181695    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:16.181764    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:16.182594    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:16.182643    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:16.182659    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:16.182695    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:16.182708    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:16.182716    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:16.182724    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:16.182730    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:16.182740    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:16.182750    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:16.182756    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:16.182765    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:16.182773    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:16.182788    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:16.182797    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:16.182804    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:16.182812    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:16.182818    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:16.182827    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:18.180727    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 25
	I0819 11:23:18.180744    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:18.180862    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:18.181685    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:18.181748    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:18.181761    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:18.181783    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:18.181795    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:18.181814    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:18.181826    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:18.181834    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:18.181843    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:18.181877    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:18.181892    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:18.181900    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:18.181908    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:18.181925    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:18.181936    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:18.181953    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:18.181963    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:18.181970    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:18.181978    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:20.181680    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 26
	I0819 11:23:20.181697    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:20.181740    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:20.182735    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:20.182779    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:20.182790    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:20.182824    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:20.182837    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:20.182850    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:20.182856    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:20.182871    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:20.182884    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:20.182901    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:20.182913    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:20.182921    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:20.182928    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:20.182941    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:20.182949    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:20.182956    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:20.182964    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:20.182971    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:20.182977    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:22.181977    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 27
	I0819 11:23:22.181993    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:22.182046    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:22.182854    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:22.182909    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:22.182925    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:22.182941    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:22.182950    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:22.182957    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:22.182966    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:22.182983    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:22.182993    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:22.183000    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:22.183009    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:22.183023    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:22.183033    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:22.183049    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:22.183059    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:22.183068    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:22.183076    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:22.183082    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:22.183090    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:24.183409    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 28
	I0819 11:23:24.183421    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:24.183481    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:24.184363    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:24.184409    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:24.184426    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:24.184445    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:24.184460    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:24.184468    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:24.184477    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:24.184488    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:24.184496    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:24.184503    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:24.184511    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:24.184518    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:24.184526    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:24.184533    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:24.184540    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:24.184555    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:24.184564    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:24.184572    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:24.184580    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:26.183088    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Attempt 29
	I0819 11:23:26.183103    8531 main.go:141] libmachine: (offline-docker-509000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:26.183176    8531 main.go:141] libmachine: (offline-docker-509000) DBG | hyperkit pid from json: 8740
	I0819 11:23:26.183968    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Searching for 32:52:7c:fc:d5:40 in /var/db/dhcpd_leases ...
	I0819 11:23:26.184018    8531 main.go:141] libmachine: (offline-docker-509000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:26.184029    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:26.184046    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:26.184056    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:26.184068    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:26.184080    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:26.184090    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:26.184096    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:26.184102    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:26.184109    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:26.184118    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:26.184125    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:26.184133    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:26.184140    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:26.184148    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:26.184154    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:26.184162    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:26.184170    8531 main.go:141] libmachine: (offline-docker-509000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:28.183165    8531 client.go:171] duration metric: took 1m0.803842886s to LocalClient.Create
	I0819 11:23:30.184120    8531 start.go:128] duration metric: took 1m2.837123088s to createHost
	I0819 11:23:30.184137    8531 start.go:83] releasing machines lock for "offline-docker-509000", held for 1m2.837222876s
	W0819 11:23:30.184233    8531 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-509000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:52:7c:fc:d5:40
	I0819 11:23:30.226452    8531 out.go:201] 
	W0819 11:23:30.268376    8531 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:52:7c:fc:d5:40
	W0819 11:23:30.268389    8531 out.go:270] * 
	W0819 11:23:30.269119    8531 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:23:30.331364    8531 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-509000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-19 11:23:30.442861 -0700 PDT m=+5514.204244193
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-509000 -n offline-docker-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-509000 -n offline-docker-509000: exit status 7 (82.859219ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 11:23:30.523738    8762 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0819 11:23:30.523762    8762 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-509000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-509000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-509000
E0819 11:23:32.136691    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-509000: (5.246609531s)
--- FAIL: TestOffline (195.52s)
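
The failure above is mechanical: the hyperkit driver generates a fresh MAC address for the new VM (32:52:7c:fc:d5:40 here), then polls /var/db/dhcpd_leases every couple of seconds, as the timestamps show, until a roughly one-minute budget runs out (attempt 29 above, after which LocalClient.Create reports 1m0.8s). The guest never requested a DHCP lease, so the same 17 entries from earlier minikube VMs turn up on every pass. The Go sketch below approximates that lookup against the macOS lease-file format; it is a minimal illustration, not the driver's actual code, and lookupLeaseIP is a hypothetical helper name.

package main

import (
	"bufio"
	"errors"
	"fmt"
	"os"
	"strings"
)

// lookupLeaseIP scans a macOS dhcpd_leases file for an entry whose
// hw_address matches mac and returns its ip_address. Entries look like:
//
//	{
//		name=minikube
//		ip_address=192.169.0.18
//		hw_address=1,b2:15:5f:e8:63:75
//		...
//	}
func lookupLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw = strings.TrimPrefix(line, "hw_address=")
			// Drop the leading "1," hardware-type prefix.
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
		case line == "}": // end of one lease entry
			if hw == mac {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", errors.New("IP address never found in dhcp leases file")
}

func main() {
	ip, err := lookupLeaseIP("/var/db/dhcpd_leases", "32:52:7c:fc:d5:40")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

One caveat the sketch inherits from the real file: macOS writes hw_address octets without leading zeros (note the 6:6:7f:7b:24:3d entry in the lease dumps above), so a robust comparison would normalize both MACs octet by octet rather than comparing raw strings.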

TestCertOptions (252.02s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-472000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0819 11:30:03.987140    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:30:29.035036    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:30:31.704115    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:30:43.441628    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-472000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.341449809s)

-- stdout --
	* [cert-options-472000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-472000" primary control-plane node in "cert-options-472000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-472000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:bd:24:c5:b5:59
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-472000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:6c:e2:c4:49:d0
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:6c:e2:c4:49:d0
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-472000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-472000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-472000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (162.959365ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-472000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-472000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-472000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-472000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-472000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (162.737262ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-472000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-472000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-472000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-19 11:32:57.72734 -0700 PDT m=+6081.500159292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-472000 -n cert-options-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-472000 -n cert-options-472000: exit status 7 (78.485651ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 11:32:57.804151    9285 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0819 11:32:57.804175    9285 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-472000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-472000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-472000: (5.23583646s)
--- FAIL: TestCertOptions (252.02s)
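
Note that the SAN assertions above fail vacuously: the test has to fetch /var/lib/minikube/certs/apiserver.crt over ssh first, and since the VM never received an IP that step exits with status 50, leaving the four SAN checks comparing against empty output. On a healthy cluster, the check the test drives through openssl can equally be done with Go's crypto/x509. A minimal sketch, assuming the certificate has already been copied out of the VM to a local apiserver.crt; certCoversSANs is a hypothetical helper, not part of the test suite.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// certCoversSANs reports whether a PEM-encoded certificate lists every
// requested IP and DNS name among its subject alternative names.
func certCoversSANs(pemBytes []byte, ips, names []string) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	for _, want := range ips {
		wantIP := net.ParseIP(want)
		ok := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(wantIP) {
				ok = true
				break
			}
		}
		if !ok {
			return false, nil
		}
	}
	for _, want := range names {
		ok := false
		for _, dns := range cert.DNSNames {
			if dns == want {
				ok = true
				break
			}
		}
		if !ok {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // assumed local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ok, err := certCoversSANs(pemBytes,
		[]string{"127.0.0.1", "192.168.15.15"},
		[]string{"localhost", "www.google.com"})
	fmt.Println(ok, err)
}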

TestDockerFlags (252.12s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-328000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0819 11:25:03.986905    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:03.994520    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:04.006718    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:04.028224    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:04.070922    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:04.154169    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:04.317533    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:04.639879    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:05.283235    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:06.566655    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:09.130142    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:14.253573    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:24.495605    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:29.037533    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:43.443121    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:25:44.977983    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:26:25.941011    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-328000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.379707561s)

-- stdout --
	* [docker-flags-328000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-328000" primary control-plane node in "docker-flags-328000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-328000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0819 11:24:38.960286    8807 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:24:38.960548    8807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:24:38.960554    8807 out.go:358] Setting ErrFile to fd 2...
	I0819 11:24:38.960558    8807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:24:38.960710    8807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 11:24:38.962255    8807 out.go:352] Setting JSON to false
	I0819 11:24:38.984896    8807 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6848,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 11:24:38.984999    8807 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:24:39.006673    8807 out.go:177] * [docker-flags-328000] minikube v1.33.1 on Darwin 14.6.1
	I0819 11:24:39.049458    8807 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 11:24:39.049490    8807 notify.go:220] Checking for updates...
	I0819 11:24:39.093425    8807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 11:24:39.114422    8807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 11:24:39.135285    8807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:24:39.156446    8807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:24:39.177608    8807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:24:39.198898    8807 config.go:182] Loaded profile config "force-systemd-flag-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:24:39.198993    8807 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:24:39.228491    8807 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 11:24:39.275221    8807 start.go:297] selected driver: hyperkit
	I0819 11:24:39.275247    8807 start.go:901] validating driver "hyperkit" against <nil>
	I0819 11:24:39.275263    8807 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:24:39.278238    8807 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:24:39.278358    8807 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 11:24:39.286821    8807 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 11:24:39.290728    8807 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:24:39.290752    8807 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 11:24:39.290785    8807 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:24:39.291008    8807 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0819 11:24:39.291067    8807 cni.go:84] Creating CNI manager for ""
	I0819 11:24:39.291083    8807 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:24:39.291088    8807 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:24:39.291160    8807 start.go:340] cluster config:
	{Name:docker-flags-328000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:24:39.291249    8807 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:24:39.333213    8807 out.go:177] * Starting "docker-flags-328000" primary control-plane node in "docker-flags-328000" cluster
	I0819 11:24:39.354049    8807 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:24:39.354086    8807 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 11:24:39.354103    8807 cache.go:56] Caching tarball of preloaded images
	I0819 11:24:39.354215    8807 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 11:24:39.354226    8807 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:24:39.354306    8807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/docker-flags-328000/config.json ...
	I0819 11:24:39.354325    8807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/docker-flags-328000/config.json: {Name:mk032ffc19f4d96028e6ffe32b19104648adf381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:24:39.354624    8807 start.go:360] acquireMachinesLock for docker-flags-328000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:25:36.267165    8807 start.go:364] duration metric: took 56.912933669s to acquireMachinesLock for "docker-flags-328000"
	I0819 11:25:36.267203    8807 start.go:93] Provisioning new machine with config: &{Name:docker-flags-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:25:36.267274    8807 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 11:25:36.288584    8807 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:25:36.288731    8807 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:25:36.288769    8807 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:25:36.297224    8807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53805
	I0819 11:25:36.297608    8807 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:25:36.298026    8807 main.go:141] libmachine: Using API Version  1
	I0819 11:25:36.298040    8807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:25:36.298258    8807 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:25:36.298378    8807 main.go:141] libmachine: (docker-flags-328000) Calling .GetMachineName
	I0819 11:25:36.298473    8807 main.go:141] libmachine: (docker-flags-328000) Calling .DriverName
	I0819 11:25:36.298606    8807 start.go:159] libmachine.API.Create for "docker-flags-328000" (driver="hyperkit")
	I0819 11:25:36.298632    8807 client.go:168] LocalClient.Create starting
	I0819 11:25:36.298668    8807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 11:25:36.298720    8807 main.go:141] libmachine: Decoding PEM data...
	I0819 11:25:36.298734    8807 main.go:141] libmachine: Parsing certificate...
	I0819 11:25:36.298792    8807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 11:25:36.298829    8807 main.go:141] libmachine: Decoding PEM data...
	I0819 11:25:36.298841    8807 main.go:141] libmachine: Parsing certificate...
	I0819 11:25:36.298853    8807 main.go:141] libmachine: Running pre-create checks...
	I0819 11:25:36.298861    8807 main.go:141] libmachine: (docker-flags-328000) Calling .PreCreateCheck
	I0819 11:25:36.298960    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:36.299142    8807 main.go:141] libmachine: (docker-flags-328000) Calling .GetConfigRaw
	I0819 11:25:36.351449    8807 main.go:141] libmachine: Creating machine...
	I0819 11:25:36.351493    8807 main.go:141] libmachine: (docker-flags-328000) Calling .Create
	I0819 11:25:36.351588    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:36.351737    8807 main.go:141] libmachine: (docker-flags-328000) DBG | I0819 11:25:36.351585    8825 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:25:36.351809    8807 main.go:141] libmachine: (docker-flags-328000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:25:36.558504    8807 main.go:141] libmachine: (docker-flags-328000) DBG | I0819 11:25:36.558434    8825 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/id_rsa...
	I0819 11:25:36.625988    8807 main.go:141] libmachine: (docker-flags-328000) DBG | I0819 11:25:36.625915    8825 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/docker-flags-328000.rawdisk...
	I0819 11:25:36.626002    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Writing magic tar header
	I0819 11:25:36.626013    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Writing SSH key tar header
	I0819 11:25:36.626582    8807 main.go:141] libmachine: (docker-flags-328000) DBG | I0819 11:25:36.626544    8825 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000 ...
	I0819 11:25:37.004657    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:37.004674    8807 main.go:141] libmachine: (docker-flags-328000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/hyperkit.pid
	I0819 11:25:37.004706    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Using UUID 2281cdb5-34bb-4dad-93e3-33c2549c6859
	I0819 11:25:37.032623    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Generated MAC 62:ed:a:ed:3d:7a
	I0819 11:25:37.032639    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-328000
	I0819 11:25:37.032674    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2281cdb5-34bb-4dad-93e3-33c2549c6859", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000122330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:25:37.032700    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2281cdb5-34bb-4dad-93e3-33c2549c6859", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000122330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:25:37.032770    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2281cdb5-34bb-4dad-93e3-33c2549c6859", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/docker-flags-328000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-328000"}
	I0819 11:25:37.032819    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2281cdb5-34bb-4dad-93e3-33c2549c6859 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/docker-flags-328000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-328000"
	I0819 11:25:37.032844    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 11:25:37.035872    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 DEBUG: hyperkit: Pid is 8826
	I0819 11:25:37.036372    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 0
	I0819 11:25:37.036385    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:37.036441    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:37.037437    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:37.037528    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:37.037547    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:37.037574    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:37.037604    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:37.037619    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:37.037693    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:37.037721    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:37.037734    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:37.037749    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:37.037780    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:37.037798    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:37.037813    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:37.037826    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:37.037837    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:37.037852    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:37.037864    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:37.037880    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:37.037892    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:37.043802    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 11:25:37.051727    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 11:25:37.052622    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:25:37.052644    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:25:37.052651    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:25:37.052660    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:25:37.428092    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 11:25:37.428116    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 11:25:37.542622    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:25:37.542647    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:25:37.542671    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:25:37.542710    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:25:37.543541    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 11:25:37.543557    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
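
The rdmsr and vmx_set_ctlreg lines are ordinary hyperkit startup chatter (guest MSR reads and VMX control-register capability probing reported on stderr); the part that matters for the failure is the polling that follows. Each "Attempt N" block re-reads the lease file on a fixed cadence, roughly every two seconds judging by the timestamps. A sketch of that loop, building on findIPByMAC from the previous snippet; the 60-attempt cap here is an assumption, not the driver's actual constant:

    // waitForIP polls the lease file until the VM's MAC shows up, mirroring
    // the "Attempt N" lines in this log; it calls findIPByMAC from the
    // previous sketch and needs "fmt", "log", and "time" imported.
    func waitForIP(mac string) (string, error) {
        for attempt := 1; attempt <= 60; attempt++ {
            log.Printf("Attempt %d", attempt) // cf. "Attempt 1", "Attempt 2", … above
            if ip, err := findIPByMAC("/var/db/dhcpd_leases", mac); err == nil {
                return ip, nil
            }
            time.Sleep(2 * time.Second) // matches the ~2s spacing of the timestamps
        }
        return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
    }
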
	I0819 11:25:39.038817    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 1
	I0819 11:25:39.038830    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:39.038949    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:39.039750    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:39.039796    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:39.039804    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:39.039813    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:39.039821    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:39.039829    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:39.039835    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:39.039841    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:39.039849    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:39.039856    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:39.039864    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:39.039872    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:39.039878    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:39.039885    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:39.039891    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:39.039899    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:39.039906    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:39.039914    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:39.039924    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
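
One detail worth noticing in these dumps: the MAC under search, 62:ed:a:ed:3d:7a, contains a single-digit octet. bootpd stores each hw_address octet without leading zeros, so a canonically formatted MAC (62:ed:0a:ed:3d:7a) has to be trimmed into the same shape before the string comparison, or the search could never match even once the VM booted. A hypothetical helper for that normalization:

    package main

    import (
        "fmt"
        "strings"
    )

    // trimMAC drops leading zeros from each octet so a canonical MAC can be
    // compared against the zero-stripped form bootpd writes to disk.
    func trimMAC(mac string) string {
        parts := strings.Split(mac, ":")
        for i, p := range parts {
            if t := strings.TrimLeft(p, "0"); t != "" {
                parts[i] = t
            } else {
                parts[i] = "0" // an all-zero octet keeps a single digit
            }
        }
        return strings.Join(parts, ":")
    }

    func main() {
        fmt.Println(trimMAC("62:ed:0a:ed:3d:7a")) // prints 62:ed:a:ed:3d:7a
    }
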
	I0819 11:25:41.040023    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 2
	I0819 11:25:41.040039    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:41.040070    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:41.040885    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:41.040936    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:41.040945    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:41.040955    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:41.040964    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:41.040973    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:41.040981    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:41.040999    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:41.041010    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:41.041020    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:41.041029    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:41.041035    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:41.041042    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:41.041049    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:41.041066    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:41.041079    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:41.041092    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:41.041101    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:41.041109    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:42.929137    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:42 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 11:25:42.929264    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:42 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 11:25:42.929277    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:42 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 11:25:42.949489    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:25:42 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 11:25:43.041320    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 3
	I0819 11:25:43.041345    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:43.041481    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:43.042598    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:43.042680    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:43.042692    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:43.042706    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:43.042715    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:43.042724    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:43.042734    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:43.042743    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:43.042767    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:43.042778    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:43.042790    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:43.042799    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:43.042807    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:43.042825    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:43.042851    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:43.042863    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:43.042874    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:43.042885    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:43.042896    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
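
The Lease field in each entry appears to be the lease's expiry as a hex-encoded Unix timestamp, which makes it easy to check that the 17 entries crowding the table are recent leases from other VMs rather than the one being waited on. A quick decode, treating the expiry interpretation as an assumption about bootpd's format:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Lease:0x66c4de04 from the 192.169.0.18 entry above, read as hex
        // epoch seconds.
        secs, err := strconv.ParseInt("66c4de04", 16, 64)
        if err != nil {
            panic(err)
        }
        fmt.Println(time.Unix(secs, 0).UTC()) // 2024-08-20 18:18:44 UTC, still current on 08-19
    }
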
	I0819 11:25:45.043120    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 4
	I0819 11:25:45.043135    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:45.043225    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:45.044036    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:45.044084    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:45.044094    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:45.044107    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:45.044117    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:45.044128    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:45.044137    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:45.044150    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:45.044165    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:45.044185    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:45.044198    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:45.044207    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:45.044215    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:45.044222    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:45.044229    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:45.044236    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:45.044244    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:45.044249    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:45.044259    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:47.045183    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 5
	I0819 11:25:47.045197    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:47.045278    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:47.046069    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:47.046114    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:47.046125    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:47.046135    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:47.046142    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:47.046158    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:47.046170    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:47.046177    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:47.046187    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:47.046193    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:47.046209    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:47.046218    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:47.046227    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:47.046236    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:47.046243    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:47.046251    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:47.046263    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:47.046281    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:47.046334    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:49.048308    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 6
	I0819 11:25:49.048322    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:49.048420    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:49.049248    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:49.049282    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:49.049297    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:49.049309    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:49.049318    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:49.049328    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:49.049337    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:49.049344    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:49.049349    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:49.049373    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:49.049395    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:49.049404    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:49.049419    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:49.049430    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:49.049438    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:49.049447    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:49.049453    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:49.049462    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:49.049478    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:51.051487    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 7
	I0819 11:25:51.051499    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:51.051556    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:51.052530    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:51.052573    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:51.052583    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:51.052595    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:51.052604    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:51.052611    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:51.052626    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:51.052634    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:51.052641    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:51.052647    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:51.052654    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:51.052661    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:51.052668    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:51.052674    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:51.052682    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:51.052689    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:51.052696    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:51.052703    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:51.052722    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:53.053197    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 8
	I0819 11:25:53.053210    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:53.053219    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:53.054274    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:53.054299    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:53.054311    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:53.054330    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:53.054343    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:53.054355    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:53.054364    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:53.054372    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:53.054380    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:53.054387    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:53.054395    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:53.054413    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:53.054426    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:53.054437    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:53.054444    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:53.054452    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:53.054462    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:53.054486    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:53.054498    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:55.054903    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 9
	I0819 11:25:55.054914    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:55.054982    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:55.055795    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:55.055851    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:55.055862    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:55.055878    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:55.055889    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:55.055905    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:55.055914    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:55.055932    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:55.055938    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:55.055945    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:55.055953    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:55.055961    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:55.055969    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:55.055976    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:55.055983    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:55.055992    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:55.056000    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:55.056007    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:55.056013    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:57.056404    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 10
	I0819 11:25:57.056418    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:57.056547    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:57.057402    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:57.057443    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:57.057456    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:57.057475    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:57.057485    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:57.057492    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:57.057498    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:57.057504    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:57.057509    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:57.057516    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:57.057521    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:57.057539    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:57.057553    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:57.057563    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:57.057571    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:57.057579    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:57.057587    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:57.057601    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:57.057614    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:59.058086    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 11
	I0819 11:25:59.058116    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:59.058175    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:25:59.058996    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:25:59.059044    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:59.059059    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:59.059079    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:59.059097    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:59.059108    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:59.059125    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:59.059133    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:59.059146    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:59.059159    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:59.059168    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:59.059176    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:59.059183    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:59.059189    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:59.059195    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:59.059207    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:59.059219    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:59.059232    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:59.059241    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:01.061236    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 12
	I0819 11:26:01.061252    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:01.061308    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:01.062116    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:01.062160    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:01.062170    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:01.062197    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:01.062209    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:01.062218    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:01.062227    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:01.062235    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:01.062242    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:01.062253    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:01.062261    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:01.062268    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:01.062276    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:01.062285    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:01.062293    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:01.062305    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:01.062313    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:01.062325    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:01.062333    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:03.064372    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 13
	I0819 11:26:03.064387    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:03.064455    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:03.065542    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:03.065591    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:03.065602    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:03.065623    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:03.065633    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:03.065642    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:03.065651    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:03.065659    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:03.065681    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:03.065695    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:03.065705    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:03.065714    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:03.065722    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:03.065734    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:03.065742    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:03.065750    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:03.065758    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:03.065764    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:03.065773    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:05.066064    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 14
	I0819 11:26:05.066075    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:05.066142    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:05.066976    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:05.067042    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:05.067057    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:05.067065    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:05.067072    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:05.067081    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:05.067089    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:05.067102    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:05.067113    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:05.067123    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:05.067131    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:05.067139    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:05.067147    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:05.067154    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:05.067162    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:05.067169    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:05.067174    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:05.067187    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:05.067200    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:07.069222    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 15
	I0819 11:26:07.069237    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:07.069281    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:07.070065    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:07.070135    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:07.070146    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:07.070161    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:07.070172    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:07.070179    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:07.070185    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:07.070194    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:07.070201    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:07.070208    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:07.070215    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:07.070223    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:07.070230    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:07.070239    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:07.070250    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:07.070258    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:07.070266    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:07.070274    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:07.070291    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:09.072309    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 16
	I0819 11:26:09.072323    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:09.072363    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:09.073251    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:09.073293    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:09.073303    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:09.073314    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:09.073321    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:09.073343    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:09.073357    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:09.073373    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:09.073388    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:09.073404    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:09.073413    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:09.073420    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:09.073428    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:09.073450    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:09.073462    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:09.073470    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:09.073482    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:09.073489    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:09.073498    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:11.075466    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 17
	I0819 11:26:11.075478    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:11.075523    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:11.076531    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:11.076581    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:11.076597    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:11.076605    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:11.076612    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:11.076620    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:11.076627    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:11.076641    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:11.076655    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:11.076672    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:11.076698    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:11.076709    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:11.076716    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:11.076725    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:11.076732    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:11.076740    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:11.076746    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:11.076754    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:11.076771    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:13.078721    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 18
	I0819 11:26:13.078736    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:13.078780    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:13.079562    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:13.079618    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:13.079628    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:13.079639    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:13.079650    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:13.079666    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:13.079679    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:13.079692    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:13.079702    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:13.079710    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:13.079718    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:13.079731    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:13.079743    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:13.079753    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:13.079758    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:13.079785    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:13.079798    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:13.079813    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:13.079827    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:15.081802    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 19
	I0819 11:26:15.081815    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:15.081865    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:15.082689    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:15.082747    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:15.082757    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:15.082767    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:15.082773    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:15.082779    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:15.082792    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:15.082799    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:15.082805    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:15.082815    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:15.082828    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:15.082840    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:15.082855    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:15.082865    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:15.082874    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:15.082881    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:15.082889    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:15.082895    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:15.082904    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:17.084886    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 20
	I0819 11:26:17.084898    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:17.084964    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:17.085757    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:17.085813    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:17.085824    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:17.085832    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:17.085841    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:17.085856    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:17.085864    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:17.085871    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:17.085878    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:17.085892    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:17.085905    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:17.085912    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:17.085920    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:17.085927    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:17.085943    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:17.085951    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:17.085959    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:17.085966    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:17.085975    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:19.087463    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 21
	I0819 11:26:19.087474    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:19.087550    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:19.088547    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:19.088579    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:19.088599    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:19.088622    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:19.088640    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:19.088649    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:19.088657    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:19.088673    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:19.088686    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:19.088694    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:19.088700    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:19.088707    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:19.088714    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:19.088722    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:19.088730    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:19.088745    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:19.088753    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:19.088761    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:19.088769    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:21.090836    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 22
	I0819 11:26:21.090851    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:21.090894    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:21.091824    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:21.091865    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:21.091872    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:21.091882    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:21.091893    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:21.091904    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:21.091914    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:21.091921    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:21.091927    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:21.091933    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:21.091938    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:21.091946    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:21.091953    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:21.091960    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:21.091967    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:21.091973    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:21.091987    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:21.092001    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:21.092010    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:23.092175    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 23
	I0819 11:26:23.092187    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:23.092224    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:23.093034    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:23.093101    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:23.093111    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:23.093119    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:23.093125    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:23.093137    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:23.093151    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:23.093159    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:23.093170    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:23.093180    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:23.093187    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:23.093194    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:23.093205    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:23.093217    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:23.093226    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:23.093235    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:23.093249    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:23.093263    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:23.093273    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:25.094161    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 24
	I0819 11:26:25.094173    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:25.094249    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:25.095086    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:25.095116    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:25.095130    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:25.095148    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:25.095155    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:25.095162    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:25.095170    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:25.095178    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:25.095187    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:25.095194    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:25.095202    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:25.095209    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:25.095215    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:25.095222    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:25.095230    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:25.095243    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:25.095254    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:25.095262    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:25.095268    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:27.096643    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 25
	I0819 11:26:27.096657    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:27.096708    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:27.097535    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:27.097593    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:27.097603    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:27.097611    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:27.097620    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:27.097628    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:27.097638    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:27.097660    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:27.097676    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:27.097688    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:27.097696    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:27.097705    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:27.097712    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:27.097721    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:27.097727    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:27.097736    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:27.097749    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:27.097761    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:27.097772    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:29.099772    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 26
	I0819 11:26:29.099783    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:29.099829    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:29.100674    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:29.100736    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:29.100746    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:29.100757    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:29.100764    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:29.100780    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:29.100789    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:29.100797    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:29.100804    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:29.100814    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:29.100820    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:29.100825    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:29.100840    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:29.100848    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:29.100860    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:29.100868    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:29.100874    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:29.100881    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:29.100888    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:31.102904    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 27
	I0819 11:26:31.102919    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:31.102986    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:31.103793    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:31.103846    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:31.103857    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:31.103865    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:31.103870    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:31.103897    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:31.103918    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:31.103930    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:31.103939    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:31.103947    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:31.103955    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:31.103962    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:31.103969    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:31.103991    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:31.104004    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:31.104020    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:31.104033    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:31.104041    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:31.104050    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:33.104221    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 28
	I0819 11:26:33.104236    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:33.104294    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:33.105367    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:33.105389    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:33.105402    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:33.105412    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:33.105419    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:33.105426    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:33.105436    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:33.105456    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:33.105466    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:33.105473    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:33.105481    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:33.105493    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:33.105501    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:33.105508    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:33.105516    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:33.105524    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:33.105531    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:33.105545    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:33.105560    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:35.106878    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 29
	I0819 11:26:35.106894    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:35.107024    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:35.107861    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for 62:ed:a:ed:3d:7a in /var/db/dhcpd_leases ...
	I0819 11:26:35.107907    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:35.107918    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:35.107927    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:35.107935    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:35.107948    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:35.107955    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:35.107962    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:35.107968    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:35.107976    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:35.107984    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:35.107993    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:35.108000    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:35.108008    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:35.108027    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:35.108052    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:35.108100    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:35.108111    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:35.108122    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:37.110123    8807 client.go:171] duration metric: took 1m0.811819624s to LocalClient.Create
	I0819 11:26:39.111346    8807 start.go:128] duration metric: took 1m2.844413052s to createHost
	I0819 11:26:39.111363    8807 start.go:83] releasing machines lock for "docker-flags-328000", held for 1m2.844537773s
	W0819 11:26:39.111377    8807 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:ed:a:ed:3d:7a
	I0819 11:26:39.111707    8807 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:26:39.111724    8807 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:26:39.120273    8807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53807
	I0819 11:26:39.120603    8807 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:26:39.120933    8807 main.go:141] libmachine: Using API Version  1
	I0819 11:26:39.120944    8807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:26:39.121209    8807 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:26:39.121604    8807 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:26:39.121645    8807 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:26:39.130029    8807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53809
	I0819 11:26:39.130372    8807 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:26:39.130759    8807 main.go:141] libmachine: Using API Version  1
	I0819 11:26:39.130782    8807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:26:39.131016    8807 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:26:39.131133    8807 main.go:141] libmachine: (docker-flags-328000) Calling .GetState
	I0819 11:26:39.131220    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:39.131302    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:39.132251    8807 main.go:141] libmachine: (docker-flags-328000) Calling .DriverName
	I0819 11:26:39.174581    8807 out.go:177] * Deleting "docker-flags-328000" in hyperkit ...
	I0819 11:26:39.195670    8807 main.go:141] libmachine: (docker-flags-328000) Calling .Remove
	I0819 11:26:39.195797    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:39.195807    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:39.195875    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:39.196872    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:39.196922    8807 main.go:141] libmachine: (docker-flags-328000) DBG | waiting for graceful shutdown
	I0819 11:26:40.199068    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:40.199121    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:40.200086    8807 main.go:141] libmachine: (docker-flags-328000) DBG | waiting for graceful shutdown
	I0819 11:26:41.200639    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:41.200765    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:41.202493    8807 main.go:141] libmachine: (docker-flags-328000) DBG | waiting for graceful shutdown
	I0819 11:26:42.203046    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:42.203139    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:42.203754    8807 main.go:141] libmachine: (docker-flags-328000) DBG | waiting for graceful shutdown
	I0819 11:26:43.205744    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:43.205837    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:43.206434    8807 main.go:141] libmachine: (docker-flags-328000) DBG | waiting for graceful shutdown
	I0819 11:26:44.206839    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:44.206907    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8826
	I0819 11:26:44.207902    8807 main.go:141] libmachine: (docker-flags-328000) DBG | sending sigkill
	I0819 11:26:44.207919    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:44.219830    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:26:44 WARN : hyperkit: failed to read stderr: EOF
	I0819 11:26:44.219854    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:26:44 WARN : hyperkit: failed to read stdout: EOF
	W0819 11:26:44.234332    8807 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:ed:a:ed:3d:7a
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:ed:a:ed:3d:7a
	I0819 11:26:44.234351    8807 start.go:729] Will try again in 5 seconds ...
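
The repeated "Attempt N" blocks in this log are the hyperkit driver polling macOS's DHCP lease database for the VM's generated MAC address; the attempt fails once the retry budget is exhausted without a match, producing the "could not find an IP address" error above. A minimal Go sketch of that polling loop, assuming the on-disk /var/db/dhcpd_leases layout of one key=value block per lease (the helper below is hypothetical, not the driver's actual code):

	package main

	import (
		"fmt"
		"os"
		"strings"
		"time"
	)

	// findIP scans /var/db/dhcpd_leases once for a lease whose hw_address
	// matches the given MAC. Each lease is a block of key=value lines, e.g.:
	//   name=minikube
	//   ip_address=192.169.0.7
	//   hw_address=1,f6:29:ff:43:e4:63
	//   lease=0x66c38727
	func findIP(mac string) (string, bool) {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err != nil {
			return "", false
		}
		var ip string
		for _, line := range strings.Split(string(data), "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v // remembered until this block's hw_address line is reached
			}
			if v, ok := strings.CutPrefix(line, "hw_address="); ok {
				// the value is "<type>,<mac>", octets printed without leading zeros
				if _, m, found := strings.Cut(v, ","); found && m == mac {
					return ip, true
				}
			}
		}
		return "", false
	}

	func main() {
		mac := "62:ed:a:ed:3d:7a" // the MAC generated for this VM in the log
		for attempt := 0; attempt < 30; attempt++ {
			if ip, ok := findIP(mac); ok {
				fmt.Println("found IP:", ip)
				return
			}
			time.Sleep(2 * time.Second) // the log shows roughly 2s between attempts
		}
		fmt.Println("could not find an IP address for", mac)
	}
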
	I0819 11:26:49.235304    8807 start.go:360] acquireMachinesLock for docker-flags-328000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:27:41.922936    8807 start.go:364] duration metric: took 52.687895163s to acquireMachinesLock for "docker-flags-328000"
	I0819 11:27:41.922963    8807 start.go:93] Provisioning new machine with config: &{Name:docker-flags-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
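
Being the docker-flags test, the parts of this config dump that matter are DockerEnv:[FOO=BAR BAZ=BAT] and DockerOpt:[debug icc=true], the values handed to the Docker daemon during provisioning. A trimmed, hypothetical Go view of just those fields (names taken from the dump above, everything else omitted):

	package main

	import "fmt"

	// ClusterConfig here is a deliberately reduced sketch of the machine
	// config logged above, keeping only the fields this run exercises.
	type ClusterConfig struct {
		Name      string
		Driver    string
		Memory    int // MB
		CPUs      int
		DiskSize  int // MB
		DockerEnv []string // environment passed to the Docker daemon
		DockerOpt []string // extra daemon options
	}

	func main() {
		cfg := ClusterConfig{
			Name:      "docker-flags-328000",
			Driver:    "hyperkit",
			Memory:    2048,
			CPUs:      2,
			DiskSize:  20000,
			DockerEnv: []string{"FOO=BAR", "BAZ=BAT"},
			DockerOpt: []string{"debug", "icc=true"},
		}
		fmt.Printf("%+v\n", cfg)
	}
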
	I0819 11:27:41.923040    8807 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 11:27:41.986331    8807 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:27:41.986394    8807 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:27:41.986420    8807 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:27:41.995100    8807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53813
	I0819 11:27:41.995449    8807 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:27:41.995796    8807 main.go:141] libmachine: Using API Version  1
	I0819 11:27:41.995815    8807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:27:41.996037    8807 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:27:41.996139    8807 main.go:141] libmachine: (docker-flags-328000) Calling .GetMachineName
	I0819 11:27:41.996228    8807 main.go:141] libmachine: (docker-flags-328000) Calling .DriverName
	I0819 11:27:41.996327    8807 start.go:159] libmachine.API.Create for "docker-flags-328000" (driver="hyperkit")
	I0819 11:27:41.996343    8807 client.go:168] LocalClient.Create starting
	I0819 11:27:41.996369    8807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 11:27:41.996416    8807 main.go:141] libmachine: Decoding PEM data...
	I0819 11:27:41.996428    8807 main.go:141] libmachine: Parsing certificate...
	I0819 11:27:41.996469    8807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 11:27:41.996507    8807 main.go:141] libmachine: Decoding PEM data...
	I0819 11:27:41.996520    8807 main.go:141] libmachine: Parsing certificate...
	I0819 11:27:41.996531    8807 main.go:141] libmachine: Running pre-create checks...
	I0819 11:27:41.996537    8807 main.go:141] libmachine: (docker-flags-328000) Calling .PreCreateCheck
	I0819 11:27:41.996615    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:41.996656    8807 main.go:141] libmachine: (docker-flags-328000) Calling .GetConfigRaw
	I0819 11:27:42.007514    8807 main.go:141] libmachine: Creating machine...
	I0819 11:27:42.007522    8807 main.go:141] libmachine: (docker-flags-328000) Calling .Create
	I0819 11:27:42.007603    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:42.007730    8807 main.go:141] libmachine: (docker-flags-328000) DBG | I0819 11:27:42.007598    8852 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:27:42.007797    8807 main.go:141] libmachine: (docker-flags-328000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:27:42.425210    8807 main.go:141] libmachine: (docker-flags-328000) DBG | I0819 11:27:42.425153    8852 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/id_rsa...
	I0819 11:27:42.600164    8807 main.go:141] libmachine: (docker-flags-328000) DBG | I0819 11:27:42.600117    8852 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/docker-flags-328000.rawdisk...
	I0819 11:27:42.600181    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Writing magic tar header
	I0819 11:27:42.600196    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Writing SSH key tar header
	I0819 11:27:42.600597    8807 main.go:141] libmachine: (docker-flags-328000) DBG | I0819 11:27:42.600557    8852 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000 ...
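
The "Writing magic tar header" and "Writing SSH key tar header" lines refer to the boot2docker bootstrap convention: the raw disk image begins with a small tar stream carrying the freshly generated SSH key, which the guest side picks up on first boot. A rough, simplified Go sketch of that step (file names taken from the log; the real driver writes additional marker entries and both halves of the key pair):

	package main

	import (
		"archive/tar"
		"log"
		"os"
	)

	func main() {
		const diskPath = "docker-flags-328000.rawdisk"
		const diskSize = 20000 * 1024 * 1024 // 20000MB, matching the log

		f, err := os.Create(diskPath)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		key, err := os.ReadFile("id_rsa.pub")
		if err != nil {
			log.Fatal(err)
		}

		// A tar stream at offset 0 of the raw disk: the "magic" header the
		// guest looks for, followed by the SSH key.
		tw := tar.NewWriter(f)
		if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0700}); err != nil {
			log.Fatal(err)
		}
		if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))}); err != nil {
			log.Fatal(err)
		}
		if _, err := tw.Write(key); err != nil {
			log.Fatal(err)
		}
		if err := tw.Close(); err != nil {
			log.Fatal(err)
		}

		// Extend to the full disk size without allocating blocks (sparse file).
		if err := f.Truncate(diskSize); err != nil {
			log.Fatal(err)
		}
	}
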
	I0819 11:27:42.980205    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:42.980221    8807 main.go:141] libmachine: (docker-flags-328000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/hyperkit.pid
	I0819 11:27:42.980288    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Using UUID 7179c478-4695-40e4-9d4d-37a359ceb314
	I0819 11:27:43.005884    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Generated MAC e2:15:d9:19:2f:dc
	I0819 11:27:43.005899    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-328000
	I0819 11:27:43.005931    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7179c478-4695-40e4-9d4d-37a359ceb314", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a5d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:27:43.005958    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7179c478-4695-40e4-9d4d-37a359ceb314", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a5d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:27:43.006015    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7179c478-4695-40e4-9d4d-37a359ceb314", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/docker-flags-328000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-328000"}
	I0819 11:27:43.006059    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7179c478-4695-40e4-9d4d-37a359ceb314 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/docker-flags-328000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-328000"
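
The CmdLine above is a complete hyperkit invocation. Transcribed as a plain exec call it looks like the sketch below, with stateDir abbreviating the machine directory from the log; this illustrates the logged argv only, while the real driver also redirects hyperkit's stdout/stderr to its logger, as the next line shows:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		stateDir := "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000"
		kernelArgs := "earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 " +
			"systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-328000"
		cmd := exec.Command("/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", stateDir+"/hyperkit.pid",
			"-c", "2", "-m", "2048M",
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net",
			"-U", "7179c478-4695-40e4-9d4d-37a359ceb314",
			"-s", "2:0,virtio-blk,"+stateDir+"/docker-flags-328000.rawdisk",
			"-s", "3,ahci-cd,"+stateDir+"/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty="+stateDir+"/tty,log="+stateDir+"/console-ring",
			"-f", "kexec,"+stateDir+"/bzimage,"+stateDir+"/initrd,"+kernelArgs,
		)
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		log.Println("hyperkit pid:", cmd.Process.Pid) // the driver records this pid in hyperkit.pid's JSON
	}
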
	I0819 11:27:43.006077    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 11:27:43.008903    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 DEBUG: hyperkit: Pid is 8866
	I0819 11:27:43.009365    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 0
	I0819 11:27:43.009395    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:43.009459    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:27:43.010394    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:27:43.010470    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:43.010479    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:43.010489    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:43.010505    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:43.010515    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:43.010523    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:43.010532    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:43.010541    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:43.010561    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:43.010577    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:43.010588    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:43.010595    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:43.010616    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:43.010646    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:43.010653    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:43.010661    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:43.010671    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:43.010680    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:43.016512    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 11:27:43.025034    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/docker-flags-328000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 11:27:43.025872    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:27:43.025894    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:27:43.025909    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:27:43.025927    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:27:43.403212    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 11:27:43.403228    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 11:27:43.517771    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:27:43.517787    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:27:43.517799    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:27:43.517807    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:27:43.518678    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 11:27:43.518696    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 11:27:45.011879    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 1
	I0819 11:27:45.011897    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:45.011996    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:27:45.012804    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:27:45.012859    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:45.012869    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:45.012878    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:45.012884    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:45.012891    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:45.012899    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:45.012905    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:45.012912    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:45.012918    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:45.012925    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:45.012948    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:45.012959    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:45.012967    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:45.012976    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:45.013041    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:45.013074    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:45.013081    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:45.013087    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:47.013552    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 2
	I0819 11:27:47.013573    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:47.013630    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:27:47.014537    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:27:47.014620    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:47.014635    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:47.014647    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:47.014669    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:47.014687    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:47.014698    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:47.014709    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:47.014718    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:47.014727    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:47.014734    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:47.014741    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:47.014751    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:47.014760    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:47.014773    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:47.014782    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:47.014792    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:47.014800    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:47.014813    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:48.911011    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 11:27:48.911158    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 11:27:48.911168    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 11:27:48.931016    8807 main.go:141] libmachine: (docker-flags-328000) DBG | 2024/08/19 11:27:48 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 11:27:49.016622    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 3
	I0819 11:27:49.016651    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:49.016810    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:27:49.018588    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:27:49.018661    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:49.018676    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:49.018701    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:49.018719    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:49.018730    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:49.018740    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:49.018749    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:49.018760    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:49.018775    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:49.018786    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:49.018815    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:49.018833    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:49.018850    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:49.018861    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:49.018889    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:49.018907    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:49.018918    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:49.018929    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:51.018858    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 4
	I0819 11:27:51.018872    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:51.018910    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:27:51.019730    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:27:51.019787    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:51.019808    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:51.019821    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:51.019830    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:51.019837    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:51.019858    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:51.019874    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:51.019883    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:51.019890    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:51.019898    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:51.019905    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:51.019911    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:51.019925    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:51.019939    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:51.019955    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:51.019967    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:51.019978    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:51.019993    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:53.022053    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 5
	I0819 11:27:53.022069    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:53.022194    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:27:53.022968    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:27:53.023043    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:53.023054    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:53.023061    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:53.023067    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:53.023075    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:53.023083    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:53.023091    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:53.023096    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:53.023109    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:53.023122    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:53.023130    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:53.023139    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:53.023147    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:53.023153    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:53.023165    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:53.023178    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:53.023186    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:53.023193    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:55.024049    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 6
	I0819 11:27:55.024062    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:55.024182    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:27:55.024979    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:27:55.025006    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:55.025015    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:55.025024    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:55.025033    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:55.025040    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:55.025062    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:55.025078    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:55.025093    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:55.025105    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:55.025119    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:55.025127    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:55.025135    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:55.025144    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:55.025152    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:55.025168    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:55.025185    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:55.025194    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:55.025200    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:57.025748    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 7
	I0819 11:27:57.025760    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:57.025805    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:27:57.026669    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:27:57.026699    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:57.026708    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:57.026717    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:57.026723    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:57.026731    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:57.026738    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:57.026745    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:57.026757    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:57.026764    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:57.026777    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:57.026788    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:57.026796    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:57.026803    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:57.026825    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:57.026837    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:57.026846    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:57.026857    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:57.026863    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:59.028902    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 8
	I0819 11:27:59.028917    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:59.028951    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:27:59.029853    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:27:59.029897    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:59.029906    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:59.029926    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:59.029936    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:59.029953    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:59.029961    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:59.029987    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:59.030000    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:59.030012    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:59.030021    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:59.030037    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:59.030050    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:59.030068    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:59.030078    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:59.030086    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:59.030094    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:59.030105    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:59.030115    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:01.032099    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 9
	I0819 11:28:01.032115    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:01.032190    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:01.032982    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:01.033018    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:01.033028    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:01.033056    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:01.033068    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:01.033085    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:01.033094    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:01.033111    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:01.033131    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:01.033144    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:01.033151    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:01.033158    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:01.033176    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:01.033192    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:01.033201    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:01.033208    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:01.033223    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:01.033234    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:01.033247    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:03.034457    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 10
	I0819 11:28:03.034476    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:03.034546    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:03.035353    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:03.035406    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:03.035416    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:03.035424    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:03.035433    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:03.035443    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:03.035451    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:03.035470    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:03.035482    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:03.035490    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:03.035497    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:03.035518    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:03.035530    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:03.035537    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:03.035549    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:03.035562    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:03.035579    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:03.035593    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:03.035623    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:05.036874    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 11
	I0819 11:28:05.036888    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:05.036944    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:05.037780    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:05.037838    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:05.037848    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:05.037865    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:05.037871    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:05.037885    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:05.037895    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:05.037918    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:05.037929    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:05.037945    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:05.037957    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:05.037967    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:05.037979    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:05.037987    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:05.037993    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:05.038009    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:05.038022    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:05.038030    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:05.038036    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:07.039954    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 12
	I0819 11:28:07.039964    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:07.040021    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:07.040890    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:07.040934    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:07.040946    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:07.040963    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:07.040972    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:07.040980    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:07.040988    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:07.041003    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:07.041017    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:07.041027    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:07.041032    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:07.041044    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:07.041057    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:07.041067    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:07.041075    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:07.041084    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:07.041093    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:07.041109    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:07.041120    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:09.042284    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 13
	I0819 11:28:09.042297    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:09.042362    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:09.043257    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:09.043315    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:09.043323    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:09.043333    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:09.043339    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:09.043347    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:09.043354    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:09.043361    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:09.043367    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:09.043373    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:09.043387    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:09.043394    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:09.043416    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:09.043427    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:09.043436    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:09.043445    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:09.043453    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:09.043461    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:09.043468    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:11.045175    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 14
	I0819 11:28:11.045187    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:11.045242    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:11.046024    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:11.046089    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:11.046103    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:11.046117    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:11.046133    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:11.046143    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:11.046159    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:11.046170    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:11.046191    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:11.046205    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:11.046225    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:11.046235    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:11.046245    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:11.046255    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:11.046266    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:11.046274    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:11.046283    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:11.046291    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:11.046300    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
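
	The Lease:0x... field on each entry is, assuming the usual bootpd convention for this file, the lease expiry time in hexadecimal Unix seconds, which is what marks these 17 "minikube" records as leftovers from earlier runs rather than the VM being waited on. A quick decode of the newest value above, under that assumption:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// Lease value of the 192.169.0.18 entry above; assumed to be
    	// the absolute expiry time in hex epoch seconds.
    	raw := "0x66c4de04"
    	secs, err := strconv.ParseInt(raw, 0, 64) // base 0 honors the 0x prefix
    	if err != nil {
    		panic(err)
    	}
    	// Prints 2024-08-20 18:18:44 +0000 UTC under that assumption.
    	fmt.Println(time.Unix(secs, 0).UTC())
    }

	That expiry lands roughly a day after this run's timestamps, so none of the stale entries age out during the window of attempts shown here.
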
	I0819 11:28:13.048335    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 15
	I0819 11:28:13.048351    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:13.048395    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:13.049316    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:13.049348    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:13.049356    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:13.049364    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:13.049372    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:13.049379    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:13.049387    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:13.049394    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:13.049407    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:13.049413    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:13.049423    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:13.049440    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:13.049451    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:13.049460    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:13.049468    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:13.049476    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:13.049484    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:13.049491    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:13.049497    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:15.050544    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 16
	I0819 11:28:15.050556    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:15.050611    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:15.051378    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:15.051431    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:15.051442    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:15.051461    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:15.051475    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:15.051484    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:15.051492    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:15.051499    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:15.051507    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:15.051514    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:15.051520    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:15.051526    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:15.051533    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:15.051552    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:15.051561    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:15.051574    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:15.051586    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:15.051602    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:15.051616    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:17.052319    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 17
	I0819 11:28:17.052337    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:17.052401    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:17.053402    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:17.053458    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:17.053468    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:17.053477    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:17.053485    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:17.053492    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:17.053500    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:17.053508    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:17.053514    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:17.053520    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:17.053526    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:17.053538    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:17.053546    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:17.053557    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:17.053568    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:17.053577    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:17.053585    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:17.053592    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:17.053600    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:19.055618    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 18
	I0819 11:28:19.055632    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:19.055804    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:19.056632    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:19.056677    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:19.056688    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:19.056704    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:19.056717    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:19.056725    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:19.056735    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:19.056751    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:19.056759    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:19.056782    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:19.056795    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:19.056805    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:19.056812    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:19.056821    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:19.056834    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:19.056844    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:19.056851    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:19.056859    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:19.056875    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:21.058848    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 19
	I0819 11:28:21.058861    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:21.058935    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:21.059777    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:21.059809    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:21.059825    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:21.059838    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:21.059845    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:21.059852    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:21.059860    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:21.059876    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:21.059888    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:21.059896    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:21.059905    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:21.059914    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:21.059929    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:21.059941    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:21.059950    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:21.059957    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:21.059964    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:21.059972    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:21.059981    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:23.062010    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 20
	I0819 11:28:23.062025    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:23.062066    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:23.062845    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:23.062902    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:23.062926    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:23.062940    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:23.062949    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:23.062957    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:23.062970    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:23.062984    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:23.062997    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:23.063006    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:23.063012    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:23.063021    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:23.063027    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:23.063034    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:23.063046    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:23.063061    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:23.063068    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:23.063075    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:23.063081    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
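
	By Attempt 20 the picture has not changed: the timestamps advance at a fixed ~2-second cadence and the same 17 entries come back with no e2:15:d9:19:2f:dc among them, so the driver is on course to exhaust its attempt budget and give up. A self-contained sketch of that retry-then-give-up shape, assuming the fixed 2s interval; pollForLease, its parameters, and the error text are illustrative, not the driver's actual symbols:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pollForLease retries lookup once per interval, mirroring the
    // "Attempt N" lines above, and gives up after the attempt budget.
    func pollForLease(lookup func() (string, bool), attempts int, interval time.Duration) (string, error) {
    	for i := 1; i <= attempts; i++ {
    		fmt.Printf("Attempt %d\n", i)
    		if ip, ok := lookup(); ok {
    			return ip, nil
    		}
    		time.Sleep(interval)
    	}
    	return "", errors.New("IP address never found in dhcp leases file")
    }

    func main() {
    	// A lookup that never succeeds, like the run traced above.
    	_, err := pollForLease(func() (string, bool) { return "", false }, 5, 2*time.Second)
    	fmt.Println(err)
    }

	In the failing run this terminal error is what ultimately propagates up out of the machine-creation step.
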
	I0819 11:28:25.064819    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 21
	I0819 11:28:25.064831    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:25.064899    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:25.065748    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:25.065803    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:25.065819    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:25.065829    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:25.065836    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:25.065847    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:25.065857    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:25.065865    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:25.065876    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:25.065883    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:25.065897    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:25.065904    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:25.065912    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:25.065927    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:25.065938    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:25.065955    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:25.065966    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:25.065974    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:25.065983    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:27.066957    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 22
	I0819 11:28:27.066972    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:27.067085    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:27.067968    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:27.068009    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:27.068019    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:27.068029    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:27.068036    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:27.068043    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:27.068051    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:27.068057    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:27.068063    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:27.068069    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:27.068083    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:27.068092    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:27.068101    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:27.068109    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:27.068117    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:27.068126    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:27.068133    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:27.068139    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:27.068146    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:29.069513    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 23
	I0819 11:28:29.069524    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:29.069612    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:29.070593    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:29.070638    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:29.070648    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:29.070665    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:29.070676    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:29.070686    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:29.070695    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:29.070712    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:29.070727    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:29.070734    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:29.070742    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:29.070750    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:29.070757    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:29.070763    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:29.070769    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:29.070774    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:29.070780    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:29.070791    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:29.070798    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:31.072826    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 24
	I0819 11:28:31.072839    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:31.072893    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:31.073710    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:31.073756    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:31.073803    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:31.073816    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:31.073822    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:31.073828    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:31.073834    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:31.073846    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:31.073853    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:31.073859    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:31.073868    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:31.073875    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:31.073881    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:31.073898    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:31.073910    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:31.073918    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:31.073927    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:31.073934    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:31.073940    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:33.075999    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 25
	I0819 11:28:33.076013    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:33.076055    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:33.077069    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:33.077116    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:33.077125    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:33.077134    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:33.077144    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:33.077156    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:33.077165    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:33.077181    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:33.077194    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:33.077202    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:33.077210    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:33.077220    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:33.077228    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:33.077235    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:33.077243    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:33.077250    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:33.077265    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:33.077272    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:33.077281    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:35.079264    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 26
	I0819 11:28:35.079277    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:35.079414    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:35.080228    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:35.080276    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:35.080290    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:35.080301    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:35.080318    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:35.080328    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:35.080335    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:35.080342    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:35.080354    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:35.080365    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:35.080373    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:35.080379    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:35.080386    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:35.080394    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:35.080407    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:35.080417    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:35.080426    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:35.080434    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:35.080462    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:37.082472    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 27
	I0819 11:28:37.082485    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:37.082545    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:37.083383    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:37.083436    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:37.083446    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:37.083464    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:37.083473    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:37.083483    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:37.083492    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:37.083505    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:37.083516    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:37.083525    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:37.083532    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:37.083543    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:37.083564    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:37.083587    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:37.083599    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:37.083611    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:37.083619    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:37.083627    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:37.083635    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:39.085576    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 28
	I0819 11:28:39.085991    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:39.086005    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:39.086443    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:39.086497    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:39.086532    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:39.086584    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:39.086614    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:39.086629    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:39.086639    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:39.086647    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:39.086659    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:39.086667    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:39.086675    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:39.086686    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:39.086694    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:39.086702    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:39.086711    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:39.086718    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:39.086728    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:39.086735    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:39.086742    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:41.088696    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Attempt 29
	I0819 11:28:41.088717    8807 main.go:141] libmachine: (docker-flags-328000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:28:41.088765    8807 main.go:141] libmachine: (docker-flags-328000) DBG | hyperkit pid from json: 8866
	I0819 11:28:41.089547    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Searching for e2:15:d9:19:2f:dc in /var/db/dhcpd_leases ...
	I0819 11:28:41.089592    8807 main.go:141] libmachine: (docker-flags-328000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:28:41.089603    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:28:41.089612    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:28:41.089619    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:28:41.089626    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:28:41.089634    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:28:41.089685    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:28:41.089721    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:28:41.089761    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:28:41.089778    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:28:41.089788    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:28:41.089796    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:28:41.089803    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:28:41.089811    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:28:41.089828    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:28:41.089840    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:28:41.089848    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:28:41.089854    8807 main.go:141] libmachine: (docker-flags-328000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:28:43.091904    8807 client.go:171] duration metric: took 1m1.09589146s to LocalClient.Create
	I0819 11:28:45.092870    8807 start.go:128] duration metric: took 1m3.170172354s to createHost
	I0819 11:28:45.092905    8807 start.go:83] releasing machines lock for "docker-flags-328000", held for 1m3.170308684s
	W0819 11:28:45.092987    8807 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-328000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e2:15:d9:19:2f:dc
	I0819 11:28:45.155805    8807 out.go:201] 
	W0819 11:28:45.176969    8807 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e2:15:d9:19:2f:dc
	W0819 11:28:45.176984    8807 out.go:270] * 
	W0819 11:28:45.177573    8807 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:28:45.239889    8807 out.go:201] 

                                                
                                                
** /stderr **
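Note on the stderr above: attempts 21-29 are the hyperkit driver polling /var/db/dhcpd_leases every two seconds for a lease whose hardware address matches the new VM's MAC (e2:15:d9:19:2f:dc); the same 17 stale "minikube" leases come back each time, the MAC never appears, and the create times out with "IP address never found in dhcp leases file". A minimal standalone sketch of that lookup, assuming the usual brace-delimited stanza format of the macOS dhcpd_leases file (illustrative only, not the driver's actual code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans a macOS dhcpd_leases file for the stanza whose
    // hw_address matches mac and returns that stanza's ip_address.
    func ipForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        matched := false
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // a new lease stanza begins
                ip, matched = "", false
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // format: hw_address=1,aa:bb:... (the leading "1," is the hardware type)
                matched = strings.HasSuffix(line, ","+mac)
            case line == "}" && matched && ip != "":
                return ip, nil
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("could not find an IP address for %s", mac)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "e2:15:d9:19:2f:dc")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(ip)
    }

Each "Attempt N" block in the log corresponds to one such scan; the driver re-reads the file until the MAC shows up or the retry budget is exhausted, which is why the same lease table is printed nine times above.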
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-328000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-328000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-328000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (178.788265ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-328000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-328000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-328000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-328000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (177.345968ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-328000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-328000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-328000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-19 11:28:45.70765 -0700 PDT m=+5829.479070792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-328000 -n docker-flags-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-328000 -n docker-flags-328000: exit status 7 (76.961868ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 11:28:45.782703    8895 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0819 11:28:45.782724    8895 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-328000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-328000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-328000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-328000: (5.238593561s)
--- FAIL: TestDockerFlags (252.12s)
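For context, the assertions at docker_test.go:56-73 above only make sense against a running VM: the test sshes in and checks that the --docker-env and --docker-opt flags reached dockerd's systemd unit. A hedged standalone sketch of that check (not the test's exact code; binary path and profile name taken from the run above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command the failing assertion at docker_test.go:56 runs.
        out, err := exec.Command("out/minikube-darwin-amd64", "-p", "docker-flags-328000",
            "ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
        if err != nil {
            // As in the log above: the VM never got an IP, so ssh exits non-zero.
            fmt.Println("ssh failed:", err)
            return
        }
        // Against a healthy VM this prints nothing; in this run the output was just "\n\n".
        for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
            if !strings.Contains(string(out), kv) {
                fmt.Printf("expected %q in dockerd Environment, got: %q\n", kv, out)
            }
        }
    }

The ExecStart check at docker_test.go:67-73 is the same pattern with --property=ExecStart and an expected *--debug* substring from the --docker-opt=debug flag; both fail here for the same upstream reason, the GUEST_PROVISION error during start.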

                                                
                                    
TestForceSystemdFlag (251.94s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-220000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-220000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.372889923s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-220000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-220000" primary control-plane node in "force-systemd-flag-220000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-220000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:23:35.822942    8772 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:23:35.823551    8772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:23:35.823560    8772 out.go:358] Setting ErrFile to fd 2...
	I0819 11:23:35.823566    8772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:23:35.824206    8772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 11:23:35.825820    8772 out.go:352] Setting JSON to false
	I0819 11:23:35.848714    8772 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6785,"bootTime":1724085030,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 11:23:35.848828    8772 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:23:35.870198    8772 out.go:177] * [force-systemd-flag-220000] minikube v1.33.1 on Darwin 14.6.1
	I0819 11:23:35.912764    8772 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 11:23:35.912789    8772 notify.go:220] Checking for updates...
	I0819 11:23:35.954823    8772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 11:23:35.975615    8772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 11:23:35.996829    8772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:23:36.017720    8772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:23:36.038548    8772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:23:36.060240    8772 config.go:182] Loaded profile config "force-systemd-env-102000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:23:36.060336    8772 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:23:36.089689    8772 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 11:23:36.131685    8772 start.go:297] selected driver: hyperkit
	I0819 11:23:36.131700    8772 start.go:901] validating driver "hyperkit" against <nil>
	I0819 11:23:36.131712    8772 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:23:36.134812    8772 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:23:36.134938    8772 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 11:23:36.143559    8772 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 11:23:36.147528    8772 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:23:36.147558    8772 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 11:23:36.147597    8772 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:23:36.147813    8772 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:23:36.147865    8772 cni.go:84] Creating CNI manager for ""
	I0819 11:23:36.147881    8772 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:23:36.147888    8772 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:23:36.147957    8772 start.go:340] cluster config:
	{Name:force-systemd-flag-220000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:23:36.148041    8772 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:23:36.190477    8772 out.go:177] * Starting "force-systemd-flag-220000" primary control-plane node in "force-systemd-flag-220000" cluster
	I0819 11:23:36.211701    8772 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:23:36.211734    8772 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 11:23:36.211750    8772 cache.go:56] Caching tarball of preloaded images
	I0819 11:23:36.211870    8772 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 11:23:36.211881    8772 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:23:36.211964    8772 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/force-systemd-flag-220000/config.json ...
	I0819 11:23:36.211981    8772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/force-systemd-flag-220000/config.json: {Name:mke26a06b04514bab0244a571a2d1a5ca87d1084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:23:36.212323    8772 start.go:360] acquireMachinesLock for force-systemd-flag-220000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:24:33.156877    8772 start.go:364] duration metric: took 56.950419999s to acquireMachinesLock for "force-systemd-flag-220000"
	I0819 11:24:33.156920    8772 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
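The 56.95 s spent in acquireMachinesLock a few lines above reflects the suite serializing host creation: only one profile creates a VM at a time, and force-systemd-env-102000 was visibly running in parallel (its profile config is loaded at 11:23:36). The lock spec logged above shows Delay:500ms and Timeout:13m0s. A minimal sketch of that pattern using a create-exclusive lock file, polling at the same 500 ms delay (minikube's real lock implementation differs; path is assumed):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire blocks until it can create lockPath exclusively, retrying every
    // 500ms up to timeout, mirroring the Delay/Timeout values logged above.
    func acquire(lockPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return nil // lock held; caller removes lockPath to release
            }
            if !os.IsExist(err) {
                return err
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", lockPath)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := acquire("/tmp/minikube-machines.lock", 13*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("machines lock acquired; safe to create a VM")
        os.Remove("/tmp/minikube-machines.lock")
    }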
	I0819 11:24:33.156984    8772 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 11:24:33.178475    8772 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:24:33.178649    8772 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:24:33.178723    8772 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:24:33.187616    8772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53785
	I0819 11:24:33.188083    8772 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:24:33.188564    8772 main.go:141] libmachine: Using API Version  1
	I0819 11:24:33.188594    8772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:24:33.188838    8772 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:24:33.188952    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .GetMachineName
	I0819 11:24:33.189053    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .DriverName
	I0819 11:24:33.189160    8772 start.go:159] libmachine.API.Create for "force-systemd-flag-220000" (driver="hyperkit")
	I0819 11:24:33.189186    8772 client.go:168] LocalClient.Create starting
	I0819 11:24:33.189217    8772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 11:24:33.189265    8772 main.go:141] libmachine: Decoding PEM data...
	I0819 11:24:33.189289    8772 main.go:141] libmachine: Parsing certificate...
	I0819 11:24:33.189343    8772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 11:24:33.189383    8772 main.go:141] libmachine: Decoding PEM data...
	I0819 11:24:33.189391    8772 main.go:141] libmachine: Parsing certificate...
	I0819 11:24:33.189403    8772 main.go:141] libmachine: Running pre-create checks...
	I0819 11:24:33.189410    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .PreCreateCheck
	I0819 11:24:33.189490    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:33.189706    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .GetConfigRaw
	I0819 11:24:33.199569    8772 main.go:141] libmachine: Creating machine...
	I0819 11:24:33.199577    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .Create
	I0819 11:24:33.199705    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:33.199824    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | I0819 11:24:33.199690    8792 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:24:33.199877    8772 main.go:141] libmachine: (force-systemd-flag-220000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:24:33.630448    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | I0819 11:24:33.630389    8792 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/id_rsa...
	I0819 11:24:33.740462    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | I0819 11:24:33.740412    8792 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/force-systemd-flag-220000.rawdisk...
	I0819 11:24:33.740476    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Writing magic tar header
	I0819 11:24:33.740500    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Writing SSH key tar header
	I0819 11:24:33.740884    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | I0819 11:24:33.740848    8792 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000 ...
	I0819 11:24:34.164365    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:34.164390    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/hyperkit.pid
	I0819 11:24:34.164401    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Using UUID e350436e-f7a8-42e7-87a5-d8b8c94b7aaa
	I0819 11:24:34.190154    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Generated MAC 1e:97:ff:4:df:6d
	I0819 11:24:34.190175    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-220000
	I0819 11:24:34.190216    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e350436e-f7a8-42e7-87a5-d8b8c94b7aaa", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:24:34.190254    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e350436e-f7a8-42e7-87a5-d8b8c94b7aaa", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:24:34.190301    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e350436e-f7a8-42e7-87a5-d8b8c94b7aaa", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/force-systemd-flag-220000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-220000"}
	I0819 11:24:34.190345    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e350436e-f7a8-42e7-87a5-d8b8c94b7aaa -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/force-systemd-flag-220000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-220000"
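The Start/check/Arguments/CmdLine DEBUG lines above are the full invocation the driver hands to /usr/local/bin/hyperkit. As a rough illustration of that launch step, here is a hedged os/exec sketch that spawns a hyperkit process with a trimmed-down version of the logged argument vector and reports its pid, much like the "Pid is 8806" line that follows; it is not the moby/hyperkit library code that produced these DEBUG lines, and the flag values are copied from the log purely as placeholders:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	stateDir := os.ExpandEnv("$HOME/.minikube/machines/force-systemd-flag-220000")
	// A trimmed-down argument vector in the shape of the logged one; the real
	// invocation also wires up virtio-blk, ahci-cd, virtio-rnd, com1 and kexec.
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // hyperkit writes its pid here
		"-c", "2", // CPUs=2
		"-m", "2048M", // Memory=2048MB
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "e350436e-f7a8-42e7-87a5-d8b8c94b7aaa", // UUID from the log
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	cmd.Stdout = os.Stdout // the driver instead redirects these to its logger
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, "hyperkit failed to start:", err)
		os.Exit(1)
	}
	fmt.Println("hyperkit pid:", cmd.Process.Pid) // cf. "Pid is 8806" below
	_ = cmd.Wait() // reap the child when the VM exits
}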
	I0819 11:24:34.190370    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 11:24:34.193319    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 DEBUG: hyperkit: Pid is 8806
	I0819 11:24:34.193743    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 0
	I0819 11:24:34.193762    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:34.193829    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:34.194786    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:34.194869    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:34.194891    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:34.194926    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:34.194953    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:34.194966    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:34.194980    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:34.194991    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:34.195002    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:34.195015    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:34.195023    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:34.195032    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:34.195044    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:34.195054    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:34.195064    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:34.195087    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:34.195101    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:34.195117    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:34.195134    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:34.201077    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 11:24:34.209305    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 11:24:34.210160    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:24:34.210186    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:24:34.210215    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:24:34.210238    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:24:34.584772    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 11:24:34.584797    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 11:24:34.700005    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:24:34.700026    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:24:34.700074    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:24:34.700104    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:24:34.700899    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 11:24:34.700912    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
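Attempt 0 above, and every numbered attempt that follows, is the same two-second polling loop: re-read /var/db/dhcpd_leases and look for an entry whose hardware address matches the MAC hyperkit generated for this VM (1e:97:ff:4:df:6d); each scan below keeps finding only the 17 leases of earlier minikube VMs. A compact sketch of that loop, assuming the stock macOS bootpd lease format in which each block carries ip_address= and hw_address=1,<mac> lines; ipForMAC is an illustrative helper, not the driver's real parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// ipForMAC scans the lease file once and returns the ip_address recorded in
// the same block as the given MAC, or "" if that MAC has no lease yet.
// Assumes ip_address= precedes hw_address= within a lease block.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		// hw_address values carry a "1," type prefix before the MAC.
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
			return ip, nil
		}
	}
	return "", sc.Err()
}

func main() {
	const mac = "1e:97:ff:4:df:6d" // MAC generated for this VM in the log above
	for attempt := 0; attempt < 60; attempt++ {
		ip, err := ipForMAC("/var/db/dhcpd_leases", mac)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ip != "" {
			fmt.Printf("attempt %d: %s leased %s\n", attempt, mac, ip)
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between attempts
	}
	fmt.Fprintln(os.Stderr, "no lease found; giving up")
	os.Exit(1)
}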
	I0819 11:24:36.195595    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 1
	I0819 11:24:36.195612    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:36.195724    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:36.196533    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:36.196600    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:36.196612    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:36.196621    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:36.196628    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:36.196640    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:36.196704    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:36.196752    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:36.196761    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:36.196769    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:36.196776    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:36.196783    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:36.196793    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:36.196804    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:36.196821    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:36.196838    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:36.196847    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:36.196857    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:36.196867    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:38.197387    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 2
	I0819 11:24:38.197408    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:38.197422    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:38.198262    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:38.198293    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:38.198301    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:38.198311    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:38.198317    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:38.198324    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:38.198329    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:38.198355    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:38.198367    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:38.198383    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:38.198392    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:38.198400    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:38.198408    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:38.198415    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:38.198423    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:38.198437    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:38.198445    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:38.198452    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:38.198460    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:40.095625    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 11:24:40.095755    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 11:24:40.095766    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 11:24:40.115660    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:24:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 11:24:40.198984    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 3
	I0819 11:24:40.199015    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:40.199183    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:40.200656    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:40.200763    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:40.200779    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:40.200801    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:40.200814    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:40.200829    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:40.200845    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:40.200859    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:40.200871    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:40.200901    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:40.200934    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:40.200968    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:40.200994    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:40.201025    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:40.201070    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:40.201106    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:40.201116    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:40.201163    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:40.201176    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:42.202775    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 4
	I0819 11:24:42.202788    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:42.202870    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:42.203680    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:42.203736    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:42.203751    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:42.203762    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:42.203779    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:42.203786    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:42.203792    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:42.203801    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:42.203834    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:42.203846    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:42.203853    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:42.203861    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:42.203870    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:42.203878    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:42.203886    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:42.203894    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:42.203902    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:42.203915    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:42.203931    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:44.205423    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 5
	I0819 11:24:44.205436    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:44.205475    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:44.206265    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:44.206310    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:44.206321    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:44.206351    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:44.206363    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:44.206372    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:44.206381    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:44.206396    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:44.206410    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:44.206423    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:44.206431    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:44.206462    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:44.206475    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:44.206484    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:44.206493    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:44.206501    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:44.206508    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:44.206515    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:44.206523    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:46.206742    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 6
	I0819 11:24:46.206756    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:46.206862    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:46.207900    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:46.207936    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:46.207944    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:46.207952    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:46.207960    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:46.207975    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:46.207989    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:46.207999    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:46.208009    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:46.208016    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:46.208024    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:46.208040    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:46.208052    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:46.208060    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:46.208067    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:46.208075    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:46.208085    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:46.208096    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:46.208106    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:48.210050    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 7
	I0819 11:24:48.210061    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:48.210157    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:48.210979    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:48.210988    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:48.210997    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:48.211003    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:48.211024    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:48.211038    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:48.211048    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:48.211057    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:48.211064    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:48.211071    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:48.211079    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:48.211088    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:48.211103    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:48.211110    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:48.211116    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:48.211122    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:48.211129    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:48.211137    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:48.211146    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:50.211722    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 8
	I0819 11:24:50.211735    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:50.211791    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:50.212615    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:50.212665    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:50.212678    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:50.212686    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:50.212692    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:50.212701    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:50.212709    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:50.212716    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:50.212725    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:50.212731    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:50.212738    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:50.212745    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:50.212751    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:50.212760    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:50.212774    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:50.212787    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:50.212807    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:50.212820    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:50.212831    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:52.213640    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 9
	I0819 11:24:52.213652    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:52.213719    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:52.214536    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:52.214561    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:52.214569    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:52.214580    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:52.214589    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:52.214596    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:52.214603    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:52.214624    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:52.214636    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:52.214654    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:52.214669    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:52.214687    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:52.214695    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:52.214706    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:52.214713    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:52.214725    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:52.214738    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:52.214747    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:52.214755    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:54.215058    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 10
	I0819 11:24:54.215072    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:54.215120    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:24:54.215950    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:24:54.216013    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:54.216031    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:54.216050    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:54.216063    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:54.216072    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:54.216081    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:54.216099    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:54.216107    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:54.216116    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:54.216125    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:54.216140    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:54.216154    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:54.216162    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:54.216170    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:54.216177    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:54.216183    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:54.216199    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:54.216213    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
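The DBG lines above trace the hyperkit driver's IP-discovery loop: every ~2 seconds it confirms the hyperkit process (pid 8806) is alive, re-reads /var/db/dhcpd_leases (macOS bootpd's lease file, which is why the driver runs as uid=0), and looks for the MAC address it assigned to the VM (1e:97:ff:4:df:6d). Below is a minimal, illustrative sketch of such a polling loop — not the driver's actual code — assuming the lease file's brace-delimited key=value format (name=, ip_address=, hw_address=); findIPByMAC is a hypothetical helper named for this example.

-- sketch --
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPByMAC does one pass over the lease file and returns the IP bound
// to mac, if any. The file is assumed to hold brace-delimited blocks of
// key=value lines (name=, ip_address=, hw_address=1,<mac>, lease=...).
func findIPByMAC(path, mac string) (string, bool) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", false // e.g. file absent before any lease is granted
	}
	var ip string
	var matched bool
	for _, raw := range strings.Split(string(data), "\n") {
		line := strings.TrimSpace(raw)
		switch {
		case line == "{": // start of a lease block: reset per-entry state
			ip, matched = "", false
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// stored as "1,aa:bb:cc:dd:ee:ff"; drop the type prefix before
			// comparing (real code may also need to normalize short octets)
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:]
			}
			matched = strings.EqualFold(hw, mac)
		case line == "}": // end of block: report a hit if both fields landed
			if matched && ip != "" {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	const mac = "1e:97:ff:4:df:6d" // the MAC the driver is waiting on above
	for attempt := 1; attempt <= 30; attempt++ {
		if ip, ok := findIPByMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("attempt %d: VM is at %s\n", attempt, ip)
			return
		}
		fmt.Printf("attempt %d: %s not in lease file yet\n", attempt, mac)
		time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
	}
	fmt.Println("timed out waiting for a DHCP lease")
}
-- /sketch --

In this run the target MAC never appears in any lease entry, so every pass falls through to the sleep and the attempt counter keeps climbing, as the condensed span below shows.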
	[Attempts 11–23 (11:24:56 through 11:25:20) repeat the block above verbatim every ~2s: hyperkit pid 8806 is still running, the same 17 /var/db/dhcpd_leases entries (192.169.0.2–192.169.0.18) are listed, and 1e:97:ff:4:df:6d matches none of them.]
	I0819 11:25:22.248471    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 24
	I0819 11:25:22.248483    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:22.248537    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:22.249382    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:25:22.249394    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:22.249403    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:22.249410    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:22.249427    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:22.249440    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:22.249448    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:22.249457    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:22.249467    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:22.249480    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:22.249488    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:22.249494    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:22.249510    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:22.249520    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:22.249528    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:22.249536    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:22.249553    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:22.249564    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:22.249575    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:24.251582    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 25
	I0819 11:25:24.251598    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:24.251632    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:24.252434    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:25:24.252494    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:24.252507    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:24.252520    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:24.252531    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:24.252548    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:24.252555    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:24.252561    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:24.252570    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:24.252586    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:24.252597    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:24.252607    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:24.252614    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:24.252621    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:24.252633    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:24.252640    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:24.252647    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:24.252654    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:24.252666    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:26.254026    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 26
	I0819 11:25:26.254041    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:26.254156    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:26.255199    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:25:26.255239    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:26.255247    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:26.255258    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:26.255266    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:26.255274    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:26.255280    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:26.255286    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:26.255302    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:26.255320    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:26.255332    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:26.255348    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:26.255357    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:26.255364    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:26.255371    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:26.255379    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:26.255386    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:26.255401    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:26.255412    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:28.257387    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 27
	I0819 11:25:28.257403    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:28.257498    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:28.258412    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:25:28.258461    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:28.258472    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:28.258481    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:28.258486    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:28.258503    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:28.258516    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:28.258524    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:28.258530    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:28.258544    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:28.258557    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:28.258579    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:28.258589    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:28.258599    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:28.258609    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:28.258623    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:28.258631    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:28.258638    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:28.258646    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:30.259950    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 28
	I0819 11:25:30.259967    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:30.259998    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:30.260946    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:25:30.260986    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:30.260996    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:30.261007    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:30.261015    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:30.261023    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:30.261042    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:30.261050    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:30.261057    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:30.261065    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:30.261071    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:30.261078    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:30.261084    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:30.261101    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:30.261112    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:30.261125    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:30.261134    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:30.261141    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:30.261152    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:25:32.262472    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 29
	I0819 11:25:32.262501    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:32.262524    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:32.263313    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 1e:97:ff:4:df:6d in /var/db/dhcpd_leases ...
	I0819 11:25:32.263364    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:25:32.263373    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:25:32.263381    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:25:32.263388    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:25:32.263406    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:25:32.263420    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:25:32.263434    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:25:32.263444    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:25:32.263453    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:25:32.263460    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:25:32.263466    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:25:32.263474    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:25:32.263483    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:25:32.263491    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:25:32.263497    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:25:32.263511    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:25:32.263519    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:25:32.263529    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
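The loop above is the hyperkit driver polling macOS's DHCP lease database every couple of seconds until the freshly generated guest MAC (1e:97:ff:4:df:6d here) shows up; in this failure it never does. A minimal sketch of that lookup, assuming the stock /var/db/dhcpd_leases layout (name=/ip_address=/hw_address= lines per entry) and a hypothetical findIPForMAC helper, not the driver's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC scans a dhcpd_leases file for an hw_address entry matching
// mac and returns the ip_address recorded earlier in the same entry.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// Entries look like "hw_address=1,b2:15:5f:e8:63:75";
			// drop the leading "1," hardware-type prefix.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if strings.EqualFold(hw, mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "1e:97:ff:4:df:6d")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("found IP:", ip)
}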
	I0819 11:25:34.264975    8772 client.go:171] duration metric: took 1m1.076257601s to LocalClient.Create
	I0819 11:25:36.267087    8772 start.go:128] duration metric: took 1m3.11058455s to createHost
	I0819 11:25:36.267107    8772 start.go:83] releasing machines lock for "force-systemd-flag-220000", held for 1m3.110710096s
	W0819 11:25:36.267123    8772 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:97:ff:4:df:6d
	I0819 11:25:36.267411    8772 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:25:36.267432    8772 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:25:36.276060    8772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53801
	I0819 11:25:36.276398    8772 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:25:36.276781    8772 main.go:141] libmachine: Using API Version  1
	I0819 11:25:36.276805    8772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:25:36.277054    8772 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:25:36.277424    8772 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:25:36.277442    8772 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:25:36.285793    8772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53803
	I0819 11:25:36.286126    8772 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:25:36.286462    8772 main.go:141] libmachine: Using API Version  1
	I0819 11:25:36.286472    8772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:25:36.286704    8772 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:25:36.286810    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .GetState
	I0819 11:25:36.286896    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:36.286980    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:36.287940    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .DriverName
	I0819 11:25:36.330386    8772 out.go:177] * Deleting "force-systemd-flag-220000" in hyperkit ...
	I0819 11:25:36.372276    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .Remove
	I0819 11:25:36.372403    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:36.372411    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:36.372478    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:36.373438    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:36.373492    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | waiting for graceful shutdown
	I0819 11:25:37.375582    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:37.375683    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:37.376607    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | waiting for graceful shutdown
	I0819 11:25:38.378135    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:38.378214    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:38.379823    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | waiting for graceful shutdown
	I0819 11:25:39.381961    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:39.382033    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:39.382716    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | waiting for graceful shutdown
	I0819 11:25:40.384282    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:40.384367    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:40.384915    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | waiting for graceful shutdown
	I0819 11:25:41.386194    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:25:41.386291    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8806
	I0819 11:25:41.387318    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | sending sigkill
	I0819 11:25:41.387328    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0819 11:25:41.400426    8772 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:97:ff:4:df:6d
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:97:ff:4:df:6d
	I0819 11:25:41.400445    8772 start.go:729] Will try again in 5 seconds ...
	I0819 11:25:41.409614    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:25:41 WARN : hyperkit: failed to read stdout: EOF
	I0819 11:25:41.409635    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:25:41 WARN : hyperkit: failed to read stderr: EOF
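After roughly thirty of those polls the create path gives up, and the lines above show the fallback: warn, tear the half-built VM down (graceful shutdown, then SIGKILL), wait five seconds, and try the whole host creation once more. A minimal sketch of that warn-delay-retry shape, with startHost as a stand-in for the real provisioning call rather than minikube's API:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("IP address never found in dhcp leases file")

// startHost is a placeholder; pretend the first attempt times out.
func startHost() error {
	return errNoLease
}

// startWithRetry runs startHost, and on failure warns, sleeps, and
// retries exactly once, mirroring the log above.
func startWithRetry(delay time.Duration) error {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(delay) // matches the "Will try again in 5 seconds" pause
		return startHost()
	}
	return nil
}

func main() {
	if err := startWithRetry(5 * time.Second); err != nil {
		fmt.Println("second attempt also failed:", err)
	}
}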
	I0819 11:25:46.402484    8772 start.go:360] acquireMachinesLock for force-systemd-flag-220000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:26:39.111447    8772 start.go:364] duration metric: took 52.709224089s to acquireMachinesLock for "force-systemd-flag-220000"
	I0819 11:26:39.111470    8772 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
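The &{...} dump above is the full provisioning config, but only a handful of its fields matter for this hyperkit create. A trimmed, hypothetical struct that mirrors just those field names from the dump (this is not minikube's actual ClusterConfig type):

package main

import "fmt"

// nodeConfig mirrors the per-node block at the end of the dump.
type nodeConfig struct {
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

// machineConfig keeps only the fields the hyperkit create path consumes.
type machineConfig struct {
	Name     string
	Memory   int // MB
	CPUs     int
	DiskSize int // MB
	Driver   string
	Nodes    []nodeConfig
}

func main() {
	cfg := machineConfig{
		Name:     "force-systemd-flag-220000",
		Memory:   2048,
		CPUs:     2,
		DiskSize: 20000,
		Driver:   "hyperkit",
		Nodes: []nodeConfig{{
			Port:              8443,
			KubernetesVersion: "v1.31.0",
			ContainerRuntime:  "docker",
			ControlPlane:      true,
			Worker:            true,
		}},
	}
	fmt.Printf("%+v\n", cfg)
}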
	I0819 11:26:39.111536    8772 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 11:26:39.132954    8772 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:26:39.133022    8772 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:26:39.133077    8772 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:26:39.141574    8772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53811
	I0819 11:26:39.141915    8772 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:26:39.142250    8772 main.go:141] libmachine: Using API Version  1
	I0819 11:26:39.142262    8772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:26:39.142475    8772 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:26:39.142592    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .GetMachineName
	I0819 11:26:39.142678    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .DriverName
	I0819 11:26:39.142785    8772 start.go:159] libmachine.API.Create for "force-systemd-flag-220000" (driver="hyperkit")
	I0819 11:26:39.142802    8772 client.go:168] LocalClient.Create starting
	I0819 11:26:39.142827    8772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 11:26:39.142875    8772 main.go:141] libmachine: Decoding PEM data...
	I0819 11:26:39.142886    8772 main.go:141] libmachine: Parsing certificate...
	I0819 11:26:39.142942    8772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 11:26:39.142992    8772 main.go:141] libmachine: Decoding PEM data...
	I0819 11:26:39.143001    8772 main.go:141] libmachine: Parsing certificate...
	I0819 11:26:39.143018    8772 main.go:141] libmachine: Running pre-create checks...
	I0819 11:26:39.143023    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .PreCreateCheck
	I0819 11:26:39.143095    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:39.143163    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .GetConfigRaw
	I0819 11:26:39.174721    8772 main.go:141] libmachine: Creating machine...
	I0819 11:26:39.174732    8772 main.go:141] libmachine: (force-systemd-flag-220000) Calling .Create
	I0819 11:26:39.174815    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:39.174950    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | I0819 11:26:39.174812    8835 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:26:39.175000    8772 main.go:141] libmachine: (force-systemd-flag-220000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:26:39.379941    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | I0819 11:26:39.379862    8835 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/id_rsa...
	I0819 11:26:39.441430    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | I0819 11:26:39.441347    8835 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/force-systemd-flag-220000.rawdisk...
	I0819 11:26:39.441440    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Writing magic tar header
	I0819 11:26:39.441448    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Writing SSH key tar header
	I0819 11:26:39.442011    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | I0819 11:26:39.441975    8835 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000 ...
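The "Writing magic tar header" / "Writing SSH key tar header" lines reflect how the fresh .rawdisk is seeded: a tar stream carrying the generated SSH key is written at the front of the image so the guest can install it on first boot. A sketch of that idea under assumptions; seedRawDisk, the paths, and the sizing are illustrative, not the driver's exact on-disk format:

package main

import (
	"archive/tar"
	"log"
	"os"
)

// seedRawDisk writes a tar stream containing the SSH key at the start of
// the raw disk image, then extends the file to the full disk size.
func seedRawDisk(rawdisk, keyName string, key []byte, sizeBytes int64) error {
	f, err := os.OpenFile(rawdisk, os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{
		Name: keyName, // e.g. ".ssh/id_rsa"
		Mode: 0o600,
		Size: int64(len(key)),
	}); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	// The tail of the image stays sparse/zeroed.
	return f.Truncate(sizeBytes)
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	// 20000 MB, matching the DiskSize in the config dump above.
	if err := seedRawDisk("demo.rawdisk", ".ssh/id_rsa", key, 20000<<20); err != nil {
		log.Fatal(err)
	}
}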
	I0819 11:26:39.821982    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:39.822008    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/hyperkit.pid
	I0819 11:26:39.822024    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Using UUID 838888cb-d260-4341-9809-1826982af049
	I0819 11:26:39.847011    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Generated MAC 3e:51:63:a8:76:fb
	I0819 11:26:39.847027    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-220000
	I0819 11:26:39.847060    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"838888cb-d260-4341-9809-1826982af049", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001301b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:26:39.847090    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"838888cb-d260-4341-9809-1826982af049", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001301b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:26:39.847160    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "838888cb-d260-4341-9809-1826982af049", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/force-systemd-flag-220000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-220000"}
	I0819 11:26:39.847205    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 838888cb-d260-4341-9809-1826982af049 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/force-systemd-flag-220000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-220000"
	I0819 11:26:39.847214    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 11:26:39.850323    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 DEBUG: hyperkit: Pid is 8836
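The DEBUG: hyperkit: Arguments/CmdLine lines above show the full argv the driver hands to /usr/local/bin/hyperkit before the pid is recorded. A condensed sketch of launching it the same way, with stateDir as a placeholder for the machine directory and the kernel cmdline shortened (a sketch, not the driver):

package main

import (
	"log"
	"os/exec"
)

func main() {
	stateDir := "/tmp/force-systemd-flag-220000" // placeholder for the machine dir
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // pid file, as in the log
		"-c", "2", // CPUs
		"-m", "2048M", // memory
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "838888cb-d260-4341-9809-1826982af049",
		"-s", "2:0,virtio-blk," + stateDir + "/force-systemd-flag-220000.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd,earlyprintk=serial loglevel=3 console=ttyS0",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	log.Printf("hyperkit pid: %d", cmd.Process.Pid)
}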
	I0819 11:26:39.851505    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 0
	I0819 11:26:39.851527    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:39.851610    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:39.852607    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:39.852672    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:39.852686    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:39.852718    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:39.852729    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:39.852736    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:39.852742    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:39.852749    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:39.852756    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:39.852767    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:39.852774    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:39.852780    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:39.852795    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:39.852806    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:39.852814    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:39.852822    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:39.852845    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:39.852863    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:39.852885    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:39.857682    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 11:26:39.865760    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-flag-220000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 11:26:39.866626    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:26:39.866640    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:26:39.866649    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:26:39.866655    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:26:40.243143    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 11:26:40.243157    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 11:26:40.357784    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:26:40.357800    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:26:40.357826    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:26:40.357852    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:26:40.358675    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 11:26:40.358686    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 11:26:41.853422    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 1
	I0819 11:26:41.853435    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:41.853538    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:41.854327    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:41.854392    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:41.854403    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:41.854412    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:41.854424    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:41.854432    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:41.854446    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:41.854458    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:41.854465    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:41.854476    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:41.854484    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:41.854494    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:41.854501    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:41.854515    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:41.854526    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:41.854534    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:41.854542    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:41.854549    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:41.854558    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:43.855080    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 2
	I0819 11:26:43.855096    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:43.855204    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:43.855969    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:43.856028    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:43.856040    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:43.856051    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:43.856064    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:43.856085    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:43.856103    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:43.856115    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:43.856125    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:43.856133    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:43.856148    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:43.856162    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:43.856174    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:43.856184    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:43.856193    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:43.856204    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:43.856214    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:43.856223    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:43.856231    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:45.734034    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 11:26:45.734192    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 11:26:45.734203    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 11:26:45.754044    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | 2024/08/19 11:26:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 11:26:45.857095    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 3
	I0819 11:26:45.857118    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:45.857247    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:45.858743    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:45.858842    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:45.858862    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:45.858881    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:45.858894    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:45.858943    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:45.858960    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:45.858973    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:45.858984    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:45.859003    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:45.859018    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:45.859028    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:45.859040    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:45.859049    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:45.859058    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:45.859079    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:45.859095    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:45.859106    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:45.859128    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:47.859524    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 4
	I0819 11:26:47.859540    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:47.859620    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:47.860404    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:47.860469    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:47.860480    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:47.860507    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:47.860525    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:47.860548    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:47.860555    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:47.860563    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:47.860572    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:47.860578    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:47.860593    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:47.860605    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:47.860615    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:47.860631    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:47.860640    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:47.860650    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:47.860659    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:47.860668    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:47.860683    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:49.861207    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 5
	I0819 11:26:49.861223    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:49.861300    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:49.862134    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:49.862188    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:49.862198    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:49.862206    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:49.862213    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:49.862230    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:49.862243    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:49.862254    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:49.862262    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:49.862274    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:49.862281    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:49.862289    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:49.862298    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:49.862305    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:49.862313    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:49.862320    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:49.862330    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:49.862345    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:49.862357    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:51.864359    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 6
	I0819 11:26:51.864374    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:51.864436    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:51.865235    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:51.865281    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:51.865293    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:51.865316    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:51.865326    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:51.865334    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:51.865350    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:51.865359    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:51.865368    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:51.865376    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:51.865386    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:51.865394    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:51.865401    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:51.865414    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:51.865421    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:51.865427    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:51.865434    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:51.865440    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:51.865450    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:53.865866    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 7
	I0819 11:26:53.865883    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:53.865953    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:53.866735    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:53.866775    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:53.866784    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:53.866794    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:53.866801    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:53.866813    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:53.866821    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:53.866828    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:53.866837    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:53.866844    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:53.866852    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:53.866868    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:53.866878    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:53.866885    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:53.866894    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:53.866901    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:53.866910    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:53.866917    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:53.866925    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:55.868965    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 8
	I0819 11:26:55.868980    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:55.869014    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:55.869807    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:55.869864    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:55.869875    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:55.869882    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:55.869889    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:55.869897    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:55.869905    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:55.869913    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:55.869919    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:55.869926    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:55.869932    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:55.869939    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:55.869948    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:55.869955    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:55.869963    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:55.869970    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:55.869976    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:55.869993    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:55.870005    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:57.871017    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 9
	I0819 11:26:57.871034    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:57.871105    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:57.871910    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:57.871940    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:57.871953    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:57.871966    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:57.871978    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:57.871986    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:57.871996    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:57.872003    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:57.872010    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:57.872023    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:57.872031    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:57.872041    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:57.872049    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:57.872061    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:57.872076    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:57.872086    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:57.872094    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:57.872108    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:57.872126    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:26:59.874132    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 10
	I0819 11:26:59.874147    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:26:59.874213    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:26:59.875034    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:26:59.875089    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:26:59.875100    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:26:59.875108    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:26:59.875115    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:26:59.875126    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:26:59.875137    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:26:59.875144    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:26:59.875153    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:26:59.875168    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:26:59.875189    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:26:59.875199    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:26:59.875210    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:26:59.875220    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:26:59.875229    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:26:59.875243    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:26:59.875256    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:26:59.875271    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:26:59.875285    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:01.876243    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 11
	I0819 11:27:01.876259    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:01.876337    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:27:01.877133    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:27:01.877171    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:01.877182    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:01.877192    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:01.877199    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:01.877205    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:01.877213    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:01.877220    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:01.877236    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:01.877245    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:01.877263    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:01.877275    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:01.877285    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:01.877294    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:01.877301    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:01.877309    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:01.877316    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:01.877325    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:01.877341    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:03.878710    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 12
	I0819 11:27:03.878723    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:03.878745    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:27:03.879558    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:27:03.879610    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:03.879625    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:03.879638    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:03.879647    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:03.879667    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:03.879692    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:03.879704    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:03.879712    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:03.879719    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:03.879727    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:03.879735    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:03.879744    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:03.879751    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:03.879759    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:03.879766    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:03.879772    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:03.879779    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:03.879786    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:05.880788    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 13
	I0819 11:27:05.880802    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:05.880884    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:27:05.881652    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:27:05.881705    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:05.881715    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:05.881724    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:05.881731    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:05.881739    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:05.881746    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:05.881761    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:05.881776    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:05.881787    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:05.881795    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:05.881803    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:05.881811    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:05.881825    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:05.881835    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:05.881842    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:05.881851    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:05.881858    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:05.881867    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:07.883181    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 14
	I0819 11:27:07.883200    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:07.883305    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:27:07.884133    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:27:07.884203    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:07.884218    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:07.884243    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:07.884259    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:07.884269    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:07.884277    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:07.884295    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:07.884319    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:07.884341    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:07.884355    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:07.884369    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:07.884380    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:07.884393    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:07.884404    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:07.884412    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:07.884419    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:07.884433    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:07.884447    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:09.885342    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 15
	I0819 11:27:09.885355    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:09.885419    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:27:09.886438    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:27:09.886463    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:09.886476    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:09.886485    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:09.886491    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:09.886509    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:09.886523    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:09.886531    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:09.886541    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:09.886560    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:09.886577    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:09.886589    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:09.886599    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:09.886608    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:09.886616    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:09.886624    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:09.886633    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:09.886649    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:09.886662    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
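Each numbered "Attempt" above is one pass of the same scan: the hyperkit driver re-reads /var/db/dhcpd_leases, parses each lease into {Name, IPAddress, HWAddress, ID, Lease}, and looks for the MAC assigned to the new VM (3e:51:63:a8:76:fb). The 17 entries it keeps printing are leases left behind by earlier minikube VMs on 192.169.0.2 through 192.169.0.18; the target MAC is never among them. As a rough illustration only (a minimal standalone sketch, not the driver's actual code), a scan like this can be written in Go as follows; it assumes macOS bootpd's brace-delimited key=value lease format and skips the MAC normalization a real driver would need:

// Illustrative sketch only: the on-disk lease format and the helper name
// findIPForMAC are assumptions for this example, not minikube's real code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// lease mirrors the fields the driver logs for each dhcpd_leases entry.
type lease struct {
	Name, IPAddress, HWAddress, ID, Lease string
}

// findIPForMAC scans a bootpd-style lease file (blocks of key=value pairs
// between "{" and "}") and returns the IP of the first lease whose
// hw_address ends with the given MAC.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var cur lease
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease block begins
			cur = lease{}
		case line == "}": // block complete: compare the hardware address
			if strings.HasSuffix(cur.HWAddress, mac) {
				return cur.IPAddress, nil
			}
		default:
			if k, v, ok := strings.Cut(line, "="); ok {
				switch k {
				case "name":
					cur.Name = v
				case "ip_address":
					cur.IPAddress = v
				case "hw_address": // e.g. "1,3e:51:63:a8:76:fb"
					cur.HWAddress = v
				case "identifier":
					cur.ID = v
				case "lease":
					cur.Lease = v
				}
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "3e:51:63:a8:76:fb")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

Note that bootpd writes MAC octets without zero padding (see the 6:6:7f:7b:24:3d entry above), so a faithful scanner would normalize both sides before comparing.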
	(attempts 16 through 28 elided: every two seconds the driver repeats the same scan of /var/db/dhcpd_leases, finds the same 17 "minikube" entries for 192.169.0.2 through 192.169.0.18, and never finds 3e:51:63:a8:76:fb)
	I0819 11:27:37.917641    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Attempt 29
	I0819 11:27:37.917656    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:27:37.917732    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | hyperkit pid from json: 8836
	I0819 11:27:37.918489    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases ...
	I0819 11:27:37.918541    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:27:37.918554    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:27:37.918567    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:27:37.918575    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:27:37.918582    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:27:37.918589    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:27:37.918596    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:27:37.918604    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:27:37.918612    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:27:37.918620    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:27:37.918627    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:27:37.918637    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:27:37.918647    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:27:37.918656    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:27:37.918664    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:27:37.918671    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:27:37.918687    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:27:37.918698    8772 main.go:141] libmachine: (force-systemd-flag-220000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:27:39.920754    8772 client.go:171] duration metric: took 1m0.778280591s to LocalClient.Create
	I0819 11:27:41.922858    8772 start.go:128] duration metric: took 1m2.811657602s to createHost
	I0819 11:27:41.922874    8772 start.go:83] releasing machines lock for "force-systemd-flag-220000", held for 1m2.811762011s
	W0819 11:27:41.922963    8772 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-220000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:51:63:a8:76:fb
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-220000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:51:63:a8:76:fb
	I0819 11:27:42.007357    8772 out.go:201] 
	W0819 11:27:42.028249    8772 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:51:63:a8:76:fb
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:51:63:a8:76:fb
	W0819 11:27:42.028260    8772 out.go:270] * 
	* 
	W0819 11:27:42.028911    8772 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:27:42.091158    8772 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-220000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
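Note: the repeated "Searching for 3e:51:63:a8:76:fb in /var/db/dhcpd_leases" attempts above show what the driver was doing for those three minutes: polling macOS's DHCP lease file for the VM's freshly generated MAC address and never finding a matching entry, which is exactly the "IP address never found in dhcp leases file" condition it exited with. A minimal, self-contained Go sketch of that kind of lookup follows; the ipForMAC helper is hypothetical, not the hyperkit driver's actual code, and it assumes the brace-delimited key=value block format that macOS's bootpd writes (visible in the dhcp entries logged above).

    // Hypothetical sketch of the lease lookup; not the hyperkit driver's code.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans a bootpd lease file (blocks of key=value lines wrapped
    // in braces) for the block whose hw_address ends with mac and returns
    // that block's ip_address.
    func ipForMAC(path, mac string) (string, bool, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", false, err
        }
        defer f.Close()

        var ip string
        matched := false
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // start of a new lease block: reset per-block state
                ip, matched = "", false
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // e.g. hw_address=1,ee:78:ef:b7:7a:3c -- compare the MAC part only
                if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
                    matched = true
                }
            case line == "}": // end of block: report a hit
                if matched && ip != "" {
                    return ip, true, nil
                }
            }
        }
        return "", false, sc.Err()
    }

    func main() {
        ip, ok, err := ipForMAC("/var/db/dhcpd_leases", "3e:51:63:a8:76:fb")
        if err != nil {
            fmt.Fprintln(os.Stderr, "read leases:", err)
            os.Exit(1)
        }
        if !ok {
            // This is the state every retry attempt above ended in.
            fmt.Println("no lease found for MAC")
            return
        }
        fmt.Println("IP:", ip)
    }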
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-220000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-220000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (176.567113ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-220000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-220000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
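Note: the probe that failed here is just `docker info --format {{.CgroupDriver}}` run inside the guest via `minikube ssh`; it never got that far because the VM has no IP to ssh to. For reference, a standalone sketch of the same check against a reachable Docker endpoint (the cgroupDriver helper is hypothetical and assumes a local docker CLI on PATH):

    // Hypothetical standalone sketch; not minikube's test code.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // cgroupDriver shells out to the docker CLI the same way the test does
    // and returns the reported cgroup driver (e.g. "cgroupfs" or "systemd").
    func cgroupDriver() (string, error) {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            return "", fmt.Errorf("docker info: %w", err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        d, err := cgroupDriver()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // The force-systemd tests expect this to print "systemd".
        fmt.Println(d)
    }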
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-19 11:27:42.377185 -0700 PDT m=+5766.148254530
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-220000 -n force-systemd-flag-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-220000 -n force-systemd-flag-220000: exit status 7 (77.585574ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 11:27:42.452904    8857 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0819 11:27:42.452928    8857 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-220000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-220000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-220000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-220000: (5.245372179s)
--- FAIL: TestForceSystemdFlag (251.94s)

TestForceSystemdEnv (234.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-102000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-102000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.692936566s)

-- stdout --
	* [force-systemd-env-102000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-102000" primary control-plane node in "force-systemd-env-102000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-102000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0819 11:20:44.720669    8697 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:20:44.720853    8697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:44.720859    8697 out.go:358] Setting ErrFile to fd 2...
	I0819 11:20:44.720862    8697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:20:44.721043    8697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 11:20:44.722577    8697 out.go:352] Setting JSON to false
	I0819 11:20:44.745054    8697 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6614,"bootTime":1724085030,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 11:20:44.745142    8697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:20:44.767164    8697 out.go:177] * [force-systemd-env-102000] minikube v1.33.1 on Darwin 14.6.1
	I0819 11:20:44.808717    8697 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 11:20:44.808760    8697 notify.go:220] Checking for updates...
	I0819 11:20:44.850506    8697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 11:20:44.871670    8697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 11:20:44.894716    8697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:20:44.917525    8697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:20:44.937566    8697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0819 11:20:44.959007    8697 config.go:182] Loaded profile config "offline-docker-509000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:20:44.959084    8697 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:20:44.987712    8697 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 11:20:45.029541    8697 start.go:297] selected driver: hyperkit
	I0819 11:20:45.029552    8697 start.go:901] validating driver "hyperkit" against <nil>
	I0819 11:20:45.029561    8697 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:20:45.032341    8697 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:20:45.032448    8697 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 11:20:45.040816    8697 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 11:20:45.044663    8697 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:20:45.044681    8697 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 11:20:45.044716    8697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:20:45.044912    8697 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:20:45.044969    8697 cni.go:84] Creating CNI manager for ""
	I0819 11:20:45.044985    8697 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:20:45.044993    8697 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:20:45.045062    8697 start.go:340] cluster config:
	{Name:force-systemd-env-102000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:20:45.045147    8697 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:20:45.066742    8697 out.go:177] * Starting "force-systemd-env-102000" primary control-plane node in "force-systemd-env-102000" cluster
	I0819 11:20:45.108521    8697 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:20:45.108546    8697 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 11:20:45.108566    8697 cache.go:56] Caching tarball of preloaded images
	I0819 11:20:45.108681    8697 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 11:20:45.108691    8697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:20:45.108764    8697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/force-systemd-env-102000/config.json ...
	I0819 11:20:45.108781    8697 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/force-systemd-env-102000/config.json: {Name:mk894eef41851dd9f87cf0cbd0ac555a32d6e450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:20:45.109065    8697 start.go:360] acquireMachinesLock for force-systemd-env-102000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:21:24.337164    8697 start.go:364] duration metric: took 39.227929995s to acquireMachinesLock for "force-systemd-env-102000"
	I0819 11:21:24.337212    8697 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-102000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:21:24.337282    8697 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 11:21:24.358870    8697 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:21:24.359036    8697 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:21:24.359080    8697 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:21:24.368013    8697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53765
	I0819 11:21:24.368621    8697 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:21:24.369205    8697 main.go:141] libmachine: Using API Version  1
	I0819 11:21:24.369214    8697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:21:24.369489    8697 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:21:24.369603    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .GetMachineName
	I0819 11:21:24.369700    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .DriverName
	I0819 11:21:24.369802    8697 start.go:159] libmachine.API.Create for "force-systemd-env-102000" (driver="hyperkit")
	I0819 11:21:24.369821    8697 client.go:168] LocalClient.Create starting
	I0819 11:21:24.369862    8697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 11:21:24.369912    8697 main.go:141] libmachine: Decoding PEM data...
	I0819 11:21:24.369929    8697 main.go:141] libmachine: Parsing certificate...
	I0819 11:21:24.369974    8697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 11:21:24.370018    8697 main.go:141] libmachine: Decoding PEM data...
	I0819 11:21:24.370032    8697 main.go:141] libmachine: Parsing certificate...
	I0819 11:21:24.370050    8697 main.go:141] libmachine: Running pre-create checks...
	I0819 11:21:24.370060    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .PreCreateCheck
	I0819 11:21:24.370132    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:24.370279    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .GetConfigRaw
	I0819 11:21:24.400753    8697 main.go:141] libmachine: Creating machine...
	I0819 11:21:24.400762    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .Create
	I0819 11:21:24.400846    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:24.400966    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | I0819 11:21:24.400838    8720 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:21:24.401062    8697 main.go:141] libmachine: (force-systemd-env-102000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:21:24.630461    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | I0819 11:21:24.630318    8720 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/id_rsa...
	I0819 11:21:24.891193    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | I0819 11:21:24.891103    8720 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/force-systemd-env-102000.rawdisk...
	I0819 11:21:24.891208    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Writing magic tar header
	I0819 11:21:24.891220    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Writing SSH key tar header
	I0819 11:21:24.891773    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | I0819 11:21:24.891739    8720 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000 ...
	I0819 11:21:25.264423    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:25.264440    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/hyperkit.pid
	I0819 11:21:25.264453    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Using UUID b6228862-70af-4f13-bae7-3e401b056d48
	I0819 11:21:25.289696    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Generated MAC d2:78:da:d1:6a:b7
	I0819 11:21:25.289714    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-102000
	I0819 11:21:25.289758    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b6228862-70af-4f13-bae7-3e401b056d48", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:21:25.289786    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b6228862-70af-4f13-bae7-3e401b056d48", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:21:25.289844    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b6228862-70af-4f13-bae7-3e401b056d48", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/force-systemd-env-102000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-102000"}
	I0819 11:21:25.289884    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b6228862-70af-4f13-bae7-3e401b056d48 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/force-systemd-env-102000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-102000"
	I0819 11:21:25.289892    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 11:21:25.292734    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 DEBUG: hyperkit: Pid is 8721
	I0819 11:21:25.293290    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 0
	I0819 11:21:25.293311    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:25.293364    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:25.294334    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:25.294434    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:25.294454    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:25.294470    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:25.294486    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:25.294512    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:25.294526    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:25.294538    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:25.294548    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:25.294561    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:25.294575    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:25.294659    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:25.294678    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:25.294686    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:25.294694    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:25.294701    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:25.294708    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:25.294718    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:25.294725    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:25.300513    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 11:21:25.308567    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 11:21:25.309262    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:21:25.309281    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:21:25.309294    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:21:25.309304    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:21:25.687194    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 11:21:25.687222    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 11:21:25.801751    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:21:25.801772    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:21:25.801828    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:21:25.801857    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:21:25.802675    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 11:21:25.802689    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 11:21:27.294757    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 1
	I0819 11:21:27.294773    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:27.294873    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:27.295671    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:27.295724    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:27.295736    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:27.295744    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:27.295751    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:27.295760    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:27.295766    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:27.295772    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:27.295780    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:27.295789    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:27.295795    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:27.295802    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:27.295812    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:27.295820    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:27.295827    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:27.295834    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:27.295844    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:27.295864    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:27.295878    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:29.296826    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 2
	I0819 11:21:29.296844    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:29.296894    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:29.297760    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:29.297831    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:29.297843    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:29.297863    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:29.297873    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:29.297882    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:29.297892    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:29.297900    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:29.297906    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:29.297912    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:29.297920    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:29.297935    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:29.297951    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:29.297959    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:29.297965    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:29.297972    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:29.297979    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:29.297990    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:29.298000    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:31.178083    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 11:21:31.178207    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 11:21:31.178217    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 11:21:31.198284    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:21:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 11:21:31.300179    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 3
	I0819 11:21:31.300203    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:31.300405    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:31.301856    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:31.301983    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:31.302000    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:31.302026    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:31.302047    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:31.302068    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:31.302084    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:31.302113    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:31.302132    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:31.302141    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:31.302152    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:31.302164    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:31.302185    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:31.302201    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:31.302230    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:31.302243    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:31.302252    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:31.302265    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:31.302288    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:33.303016    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 4
	I0819 11:21:33.303034    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:33.303131    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:33.303948    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:33.304001    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:33.304013    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:33.304022    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:33.304028    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:33.304034    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:33.304040    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:33.304047    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:33.304053    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:33.304068    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:33.304077    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:33.304091    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:33.304100    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:33.304109    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:33.304117    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:33.304124    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:33.304132    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:33.304138    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:33.304147    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:35.306189    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 5
	I0819 11:21:35.306204    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:35.306252    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:35.307063    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:35.307114    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:35.307129    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:35.307151    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:35.307158    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:35.307176    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:35.307185    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:35.307196    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:35.307204    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:35.307211    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:35.307222    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:35.307231    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:35.307241    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:35.307249    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:35.307254    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:35.307266    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:35.307280    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:35.307296    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:35.307305    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:37.309320    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 6
	I0819 11:21:37.309334    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:37.309399    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:37.310196    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:37.310248    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:37.310256    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:37.310269    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:37.310295    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:37.310308    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:37.310316    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:37.310322    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:37.310328    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:37.310337    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:37.310346    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:37.310360    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:37.310372    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:37.310380    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:37.310388    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:37.310395    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:37.310404    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:37.310419    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:37.310427    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:39.312436    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 7
	I0819 11:21:39.312452    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:39.312519    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:39.313318    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:39.313365    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:39.313375    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:39.313385    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:39.313392    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:39.313399    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:39.313405    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:39.313419    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:39.313427    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:39.313435    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:39.313441    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:39.313448    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:39.313458    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:39.313466    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:39.313475    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:39.313482    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:39.313490    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:39.313497    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:39.313506    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:41.314572    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 8
	I0819 11:21:41.314586    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:41.314649    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:41.315709    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:41.315777    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:41.315790    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:41.315815    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:41.315823    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:41.315830    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:41.315836    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:41.315842    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:41.315848    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:41.315855    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:41.315864    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:41.315870    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:41.315877    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:41.315883    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:41.315890    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:41.315897    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:41.315905    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:41.315913    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:41.315921    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:43.317945    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 9
	I0819 11:21:43.317957    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:43.318013    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:43.318805    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:43.318859    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:43.318867    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:43.318895    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:43.318908    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:43.318915    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:43.318921    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:43.318944    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:43.318959    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:43.318971    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:43.318979    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:43.318992    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:43.319000    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:43.319009    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:43.319017    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:43.319027    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:43.319035    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:43.319042    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:43.319050    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:45.319733    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 10
	I0819 11:21:45.319746    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:45.319795    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:45.320611    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:45.320647    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:45.320670    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:45.320679    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:45.320699    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:45.320709    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:45.320717    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:45.320724    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:45.320733    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:45.320738    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:45.320745    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:45.320751    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:45.320758    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:45.320766    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:45.320777    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:45.320786    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:45.320794    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:45.320801    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:45.320811    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:47.322922    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 11
	I0819 11:21:47.322946    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:47.322989    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:47.323796    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:47.323850    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:47.323860    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:47.323869    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:47.323876    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:47.323882    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:47.323896    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:47.323910    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:47.323921    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:47.323937    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:47.323946    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:47.323960    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:47.323973    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:47.323986    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:47.323995    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:47.324003    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:47.324012    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:47.324019    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:47.324025    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:49.325812    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 12
	I0819 11:21:49.325828    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:49.325886    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:49.326681    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:49.326725    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:49.326735    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:49.326744    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:49.326749    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:49.326756    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:49.326763    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:49.326770    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:49.326777    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:49.326783    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:49.326788    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:49.326794    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:49.326802    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:49.326809    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:49.326815    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:49.326822    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:49.326843    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:49.326851    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:49.326860    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:51.327011    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 13
	I0819 11:21:51.327023    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:51.327094    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:51.327917    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:51.327947    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:51.327959    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:51.327983    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:51.327991    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:51.327998    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:51.328006    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:51.328013    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:51.328021    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:51.328036    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:51.328051    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:51.328061    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:51.328069    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:51.328076    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:51.328084    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:51.328092    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:51.328100    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:51.328107    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:51.328115    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:53.330191    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 14
	I0819 11:21:53.330205    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:53.330259    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:53.331102    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:53.331152    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:53.331174    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:53.331195    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:53.331208    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:53.331216    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:53.331225    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:53.331239    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:53.331248    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:53.331259    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:53.331267    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:53.331275    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:53.331283    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:53.331291    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:53.331297    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:53.331304    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:53.331312    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:53.331319    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:53.331328    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:55.331566    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 15
	I0819 11:21:55.331579    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:55.331652    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:55.332442    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:55.332490    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:55.332510    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:55.332533    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:55.332546    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:55.332556    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:55.332564    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:55.332572    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:55.332581    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:55.332589    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:55.332597    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:55.332604    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:55.332611    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:55.332617    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:55.332623    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:55.332632    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:55.332641    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:55.332648    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:55.332654    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:57.333475    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 16
	I0819 11:21:57.333489    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:57.333554    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:57.334336    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:57.334398    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:57.334412    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:57.334427    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:57.334433    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:57.334440    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:57.334446    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:57.334454    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:57.334461    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:57.334469    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:57.334476    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:57.334483    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:57.334491    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:57.334505    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:57.334515    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:57.334539    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:57.334568    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:57.334575    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:57.334582    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:21:59.336605    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 17
	I0819 11:21:59.336620    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:21:59.336718    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:21:59.337509    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:21:59.337590    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:21:59.337605    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:21:59.337624    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:21:59.337635    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:21:59.337650    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:21:59.337662    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:21:59.337670    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:21:59.337678    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:21:59.337685    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:21:59.337693    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:21:59.337700    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:21:59.337707    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:21:59.337713    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:21:59.337723    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:21:59.337731    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:21:59.337741    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:21:59.337751    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:21:59.337765    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:01.339193    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 18
	I0819 11:22:01.339205    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:01.339274    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:01.340134    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:01.340180    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:01.340194    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:01.340207    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:01.340215    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:01.340228    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:01.340245    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:01.340265    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:01.340276    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:01.340291    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:01.340303    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:01.340311    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:01.340319    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:01.340331    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:01.340341    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:01.340357    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:01.340370    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:01.340387    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:01.340400    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:03.342393    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 19
	I0819 11:22:03.342409    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:03.342442    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:03.343334    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:03.343377    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:03.343388    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:03.343398    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:03.343404    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:03.343439    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:03.343451    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:03.343459    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:03.343467    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:03.343475    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:03.343490    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:03.343506    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:03.343520    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:03.343528    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:03.343538    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:03.343546    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:03.343554    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:03.343562    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:03.343571    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:05.344531    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 20
	I0819 11:22:05.344554    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:05.344616    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:05.345408    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:05.345464    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:05.345476    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:05.345486    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:05.345497    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:05.345504    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:05.345510    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:05.345525    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:05.345539    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:05.345547    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:05.345553    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:05.345561    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:05.345570    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:05.345585    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:05.345597    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:05.345615    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:05.345628    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:05.345637    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:05.345645    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:07.346251    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 21
	I0819 11:22:07.346262    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:07.346329    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:07.347133    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:07.347191    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:07.347206    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:07.347218    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:07.347226    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:07.347247    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:07.347257    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:07.347264    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:07.347272    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:07.347279    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:07.347286    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:07.347306    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:07.347316    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:07.347324    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:07.347330    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:07.347336    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:07.347343    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:07.347357    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:07.347372    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:09.347375    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 22
	I0819 11:22:09.347391    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:09.347418    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:09.348286    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:09.348330    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:09.348351    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:09.348365    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:09.348378    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:09.348389    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:09.348397    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:09.348406    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:09.348422    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:09.348435    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:09.348443    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:09.348452    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:09.348483    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:09.348508    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:09.348540    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:09.348546    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:09.348552    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:09.348558    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:09.348565    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:11.350029    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 23
	I0819 11:22:11.350045    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:11.350116    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:11.350908    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:11.350956    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:11.350966    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:11.350974    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:11.350983    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:11.350992    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:11.350998    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:11.351005    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:11.351011    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:11.351023    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:11.351037    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:11.351053    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:11.351077    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:11.351084    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:11.351095    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:11.351101    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:11.351108    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:11.351116    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:11.351131    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:13.353161    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 24
	I0819 11:22:13.353180    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:13.353227    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:13.354169    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:13.354209    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:13.354227    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:13.354240    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:13.354248    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:13.354272    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:13.354285    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:13.354292    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:13.354315    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:13.354328    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:13.354336    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:13.354344    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:13.354351    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:13.354359    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:13.354388    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:13.354408    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:13.354429    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:13.354459    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:13.354474    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:15.354577    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 25
	I0819 11:22:15.354592    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:15.354661    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:15.355722    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:15.355771    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:15.355783    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:15.355804    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:15.355814    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:15.355821    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:15.355831    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:15.355838    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:15.355844    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:15.355851    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:15.355859    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:15.355866    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:15.355875    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:15.355889    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:15.355897    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:15.355905    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:15.355912    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:15.355918    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:15.355923    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:17.356157    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 26
	I0819 11:22:17.356173    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:17.356241    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:17.357070    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:17.357112    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:17.357132    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:17.357142    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:17.357149    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:17.357155    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:17.357162    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:17.357168    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:17.357184    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:17.357195    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:17.357203    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:17.357211    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:17.357219    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:17.357226    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:17.357232    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:17.357238    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:17.357246    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:17.357255    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:17.357263    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:19.359352    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 27
	I0819 11:22:19.359368    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:19.359406    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:19.360544    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:19.360599    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:19.360610    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:19.360631    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:19.360642    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:19.360657    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:19.360671    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:19.360680    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:19.360689    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:19.360697    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:19.360704    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:19.360712    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:19.360719    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:19.360730    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:19.360742    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:19.360755    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:19.360765    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:19.360772    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:19.360780    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:21.362760    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 28
	I0819 11:22:21.362784    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:21.362829    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:21.363879    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:21.363920    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:21.363928    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:21.363940    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:21.363947    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:21.363953    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:21.363960    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:21.363967    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:21.363975    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:21.363992    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:21.364005    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:21.364016    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:21.364029    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:21.364036    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:21.364044    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:21.364055    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:21.364063    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:21.364070    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:21.364077    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:23.365101    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 29
	I0819 11:22:23.365120    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:23.365197    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:23.366013    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for d2:78:da:d1:6a:b7 in /var/db/dhcpd_leases ...
	I0819 11:22:23.366077    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:22:23.366087    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:22:23.366097    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:22:23.366108    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:22:23.366119    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:22:23.366127    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:22:23.366135    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:22:23.366142    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:22:23.366149    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:22:23.366157    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:22:23.366169    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:22:23.366179    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:22:23.366189    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:22:23.366198    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:22:23.366205    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:22:23.366213    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:22:23.366224    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:22:23.366232    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:22:25.367918    8697 client.go:171] duration metric: took 1m0.997849463s to LocalClient.Create
	I0819 11:22:27.370015    8697 start.go:128] duration metric: took 1m3.032479995s to createHost
	I0819 11:22:27.370027    8697 start.go:83] releasing machines lock for "force-systemd-env-102000", held for 1m3.03260791s
	W0819 11:22:27.370043    8697 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:78:da:d1:6a:b7
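
The dozen near-identical "Attempt N" blocks above are the hyperkit driver polling macOS's DHCP lease database every ~2s for the guest's generated MAC (d2:78:da:d1:6a:b7); none of the 17 listed leases ever matches, so once the create budget expires (the 1m0.997s duration metric from client.go) the run fails with "IP address never found in dhcp leases file". Below is a minimal sketch of that kind of poll, assuming the usual bootpd lease format (name=/ip_address=/hw_address= fields between braces); the parsing details are illustrative, not minikube's actual parser.

// leasescan: sketch of the poll described by the "Searching for <MAC> in
// /var/db/dhcpd_leases" log lines above. Assumed lease format, not the
// driver's real code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC reads the leases file once and returns the IP bound to mac, if any.
func findIPForMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "1,aa:bb:..." — drop the leading hardware type.
			// Note bootpd strips leading zeros from octets (e.g. "6:6:7f:...");
			// a real parser would normalize both sides before comparing.
			hw = line[strings.Index(line, ",")+1:]
		case line == "}":
			if strings.EqualFold(hw, mac) {
				return ip, true
			}
			ip, hw = "", ""
		}
	}
	return "", false
}

func main() {
	const mac = "d2:78:da:d1:6a:b7" // the MAC the failing run was waiting for
	// Poll every 2s, as the log timestamps suggest, for roughly a minute.
	for i := 0; i < 30; i++ {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Println("found", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("IP address never found in dhcp leases file")
}

The lease dump itself shows the failure mode: addresses .2 through .18 are all bound to other MACs, meaning this VM never completed a DHCP exchange at all.
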
	I0819 11:22:27.370374    8697 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:22:27.370399    8697 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:22:27.379349    8697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53767
	I0819 11:22:27.379825    8697 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:22:27.380332    8697 main.go:141] libmachine: Using API Version  1
	I0819 11:22:27.380344    8697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:22:27.380561    8697 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:22:27.380920    8697 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:22:27.380963    8697 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:22:27.389301    8697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53769
	I0819 11:22:27.389764    8697 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:22:27.390279    8697 main.go:141] libmachine: Using API Version  1
	I0819 11:22:27.390293    8697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:22:27.390597    8697 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:22:27.390724    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .GetState
	I0819 11:22:27.390809    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:27.390874    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:27.391890    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .DriverName
	I0819 11:22:27.433411    8697 out.go:177] * Deleting "force-systemd-env-102000" in hyperkit ...
	I0819 11:22:27.475369    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .Remove
	I0819 11:22:27.475497    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:27.475515    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:27.475576    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:27.476533    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:27.476594    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | waiting for graceful shutdown
	I0819 11:22:28.477595    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:28.477674    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:28.478601    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | waiting for graceful shutdown
	I0819 11:22:29.480743    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:29.480827    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:29.482518    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | waiting for graceful shutdown
	I0819 11:22:30.483722    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:30.483788    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:30.484556    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | waiting for graceful shutdown
	I0819 11:22:31.485187    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:31.485266    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:31.485828    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | waiting for graceful shutdown
	I0819 11:22:32.486624    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:32.486732    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8721
	I0819 11:22:32.487826    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | sending sigkill
	I0819 11:22:32.487837    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:22:32.499212    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:22:32 WARN : hyperkit: failed to read stderr: EOF
	I0819 11:22:32.499228    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:22:32 WARN : hyperkit: failed to read stdout: EOF
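
Before retrying, the driver tears the failed VM down with a grace period: it polls "waiting for graceful shutdown" from 11:22:27 to 11:22:32 above, then falls back to "sending sigkill" (the stderr/stdout EOF warnings are the hyperkit process's pipes closing). A sketch of that terminate-then-kill pattern; the pid comes from the log's "hyperkit pid from json", but the helper itself is hypothetical.

// stopProcess sends SIGTERM, waits up to grace for the process to exit,
// then falls back to SIGKILL — mirroring the shutdown sequence in the log.
package main

import (
	"syscall"
	"time"
)

func stopProcess(pid int, grace time.Duration) error {
	_ = syscall.Kill(pid, syscall.SIGTERM)
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 delivers nothing; it only checks whether the pid still exists.
		if err := syscall.Kill(pid, 0); err != nil {
			return nil // process is gone
		}
		time.Sleep(time.Second)
	}
	return syscall.Kill(pid, syscall.SIGKILL)
}

func main() {
	_ = stopProcess(8721, 5*time.Second) // ~5s grace, matching the log's timestamps
}
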
	W0819 11:22:32.512970    8697 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:78:da:d1:6a:b7
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d2:78:da:d1:6a:b7
	I0819 11:22:32.512988    8697 start.go:729] Will try again in 5 seconds ...
	I0819 11:22:37.514100    8697 start.go:360] acquireMachinesLock for force-systemd-env-102000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:23:30.184201    8697 start.go:364] duration metric: took 52.693310207s to acquireMachinesLock for "force-systemd-env-102000"
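
The 52.69s spent in acquireMachinesLock is contention, not work: machine creation is serialized behind a host-wide named lock (its parameters, {Delay:500ms Timeout:13m0s}, appear in the log), and a parallel test in this run held it. A rough file-based stand-in with the same delay/timeout semantics — minikube itself uses a mutex library, so treat this purely as an illustration of the semantics, not its implementation.

// acquire spins on an O_EXCL lock file with the Delay/Timeout shown in the log.
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
	path := filepath.Join(os.TempDir(), name+".lock")
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + name)
		}
		time.Sleep(delay) // retry every 500ms, per the log's Delay field
	}
}

func main() {
	start := time.Now()
	release, err := acquire("machines", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Printf("acquired after %s\n", time.Since(start)) // the duration metric the log reports
}
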
	I0819 11:23:30.184249    8697 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-102000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-102000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:23:30.184304    8697 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 11:23:30.226473    8697 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:23:30.226544    8697 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:23:30.226567    8697 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:23:30.235083    8697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53773
	I0819 11:23:30.235429    8697 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:23:30.235829    8697 main.go:141] libmachine: Using API Version  1
	I0819 11:23:30.235859    8697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:23:30.236090    8697 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:23:30.236224    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .GetMachineName
	I0819 11:23:30.236320    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .DriverName
	I0819 11:23:30.236414    8697 start.go:159] libmachine.API.Create for "force-systemd-env-102000" (driver="hyperkit")
	I0819 11:23:30.236439    8697 client.go:168] LocalClient.Create starting
	I0819 11:23:30.236467    8697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 11:23:30.236520    8697 main.go:141] libmachine: Decoding PEM data...
	I0819 11:23:30.236534    8697 main.go:141] libmachine: Parsing certificate...
	I0819 11:23:30.236578    8697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 11:23:30.236617    8697 main.go:141] libmachine: Decoding PEM data...
	I0819 11:23:30.236628    8697 main.go:141] libmachine: Parsing certificate...
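
[Editor's note] The "Reading certificate data / Decoding PEM data... / Parsing certificate..." triplet above (for ca.pem, then cert.pem) matches the standard Go flow of reading a PEM file, decoding the block, and parsing the DER payload. A minimal sketch of that flow, assuming only the path printed in the log (error handling abbreviated; this mirrors the steps, not the driver's actual code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path as it appears in the log above.
		data, err := os.ReadFile("/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data) // "Decoding PEM data..."
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
		if err != nil {
			panic(err)
		}
		fmt.Println(cert.Subject)
	}
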
	I0819 11:23:30.236643    8697 main.go:141] libmachine: Running pre-create checks...
	I0819 11:23:30.236648    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .PreCreateCheck
	I0819 11:23:30.236721    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:30.236759    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .GetConfigRaw
	I0819 11:23:30.268356    8697 main.go:141] libmachine: Creating machine...
	I0819 11:23:30.268365    8697 main.go:141] libmachine: (force-systemd-env-102000) Calling .Create
	I0819 11:23:30.268465    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:30.268600    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | I0819 11:23:30.268452    8761 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 11:23:30.268664    8697 main.go:141] libmachine: (force-systemd-env-102000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 11:23:30.576805    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | I0819 11:23:30.576712    8761 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/id_rsa...
	I0819 11:23:30.654531    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | I0819 11:23:30.654467    8761 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/force-systemd-env-102000.rawdisk...
	I0819 11:23:30.654548    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Writing magic tar header
	I0819 11:23:30.654572    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Writing SSH key tar header
	I0819 11:23:30.674396    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | I0819 11:23:30.674361    8761 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000 ...
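
[Editor's note] The common.go lines above show the driver creating an SSH key, then a raw disk image, then writing a "magic" tar header and an SSH-key tar header into it. A rough sketch of that seeding technique follows; the magic entry name and the exact on-disk layout are assumptions for illustration and are not confirmed by this log:

	package main

	import (
		"archive/tar"
		"os"
	)

	// Sketch only: seed a raw disk image with a tar stream carrying the SSH
	// key, echoing the "Writing magic tar header" / "Writing SSH key tar
	// header" steps above. The magic entry name below is an assumption.
	func seedRawDisk(path string, sshKey []byte, sizeBytes int64) error {
		f, err := os.Create(path)
		if err != nil {
			return err
		}
		defer f.Close()

		tw := tar.NewWriter(f)
		// "Magic" entry telling the guest to format the disk on first boot (assumed).
		if err := tw.WriteHeader(&tar.Header{Name: "boot2docker, please format-me", Size: 0}); err != nil {
			return err
		}
		// SSH key entry, as in "Creating ssh key: .../id_rsa" above.
		if err := tw.WriteHeader(&tar.Header{Name: ".ssh/id_rsa", Size: int64(len(sshKey)), Mode: 0600}); err != nil {
			return err
		}
		if _, err := tw.Write(sshKey); err != nil {
			return err
		}
		if err := tw.Close(); err != nil {
			return err
		}
		// Extend the file to the full requested disk size (sparse on APFS/HFS+).
		return f.Truncate(sizeBytes)
	}

	func main() {
		_ = seedRawDisk("/tmp/example.rawdisk", []byte("ssh-rsa AAAA... example"), 20000<<20) // 20000 MB, as in the log
	}
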
	I0819 11:23:31.052770    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:31.052792    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/hyperkit.pid
	I0819 11:23:31.052832    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Using UUID 1556753c-e651-4901-a170-f096b88a4d9b
	I0819 11:23:31.079358    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Generated MAC a6:5c:ce:7e:b3:aa
	I0819 11:23:31.079374    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-102000
	I0819 11:23:31.079408    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1556753c-e651-4901-a170-f096b88a4d9b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:23:31.079440    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1556753c-e651-4901-a170-f096b88a4d9b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 11:23:31.079486    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1556753c-e651-4901-a170-f096b88a4d9b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/force-systemd-env-102000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-102000"}
	I0819 11:23:31.079519    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1556753c-e651-4901-a170-f096b88a4d9b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/force-systemd-env-102000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-102000"
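
[Editor's note] The Arguments/CmdLine lines above show exactly how the driver shells out to /usr/local/bin/hyperkit. A condensed sketch of launching it with the same flag layout (the flags are copied from the logged argv; this mirrors the log, not the driver's real code, and will only run on a host that actually has hyperkit installed):

	package main

	import "os/exec"

	func main() {
		state := "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000"
		// Flag layout copied from the "DEBUG: hyperkit: Arguments" line above.
		cmd := exec.Command("/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", state+"/hyperkit.pid", // PID file the driver later reads back
			"-c", "2",     // CPUs
			"-m", "2048M", // memory
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net",
			"-U", "1556753c-e651-4901-a170-f096b88a4d9b", // UUID from the log; the DHCP lease is keyed off the MAC derived from it
			"-s", "2:0,virtio-blk,"+state+"/force-systemd-env-102000.rawdisk",
			"-s", "3,ahci-cd,"+state+"/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty="+state+"/tty,log="+state+"/console-ring",
			"-f", "kexec,"+state+"/bzimage,"+state+"/initrd,earlyprintk=serial loglevel=3 console=ttyS0",
		)
		if err := cmd.Start(); err != nil { // hyperkit stays in the background; stdout/stderr get redirected to the logger
			panic(err)
		}
	}
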
	I0819 11:23:31.079568    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 11:23:31.082490    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 DEBUG: hyperkit: Pid is 8771
	I0819 11:23:31.083648    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 0
	I0819 11:23:31.083664    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:31.083728    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:31.084731    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:31.084816    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:31.084852    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:31.084879    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:31.084893    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:31.084907    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:31.084923    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:31.084936    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:31.084952    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:31.084968    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:31.084988    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:31.084997    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:31.085010    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:31.085024    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:31.085037    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:31.085071    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:31.085090    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:31.085104    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:31.085116    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
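
[Editor's note] Each attempt above scans /var/db/dhcpd_leases for the freshly generated MAC (a6:5c:ce:7e:b3:aa); all 17 existing leases belong to other machines, so the scan keeps coming up empty. A small sketch of that matching step, using the entry fields exactly as the debug lines print them (the DhcpEntry type and ipForMAC helper are hypothetical names, not the driver's):

	package main

	import "fmt"

	// DhcpEntry mirrors the fields the log prints for each lease:
	// {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ...}
	type DhcpEntry struct {
		Name      string
		IPAddress string
		HWAddress string
	}

	// ipForMAC returns the leased IP for a MAC, or "" when no lease exists
	// yet -- which is exactly why the driver keeps retrying above.
	func ipForMAC(entries []DhcpEntry, mac string) string {
		for _, e := range entries {
			if e.HWAddress == mac {
				return e.IPAddress
			}
		}
		return ""
	}

	func main() {
		leases := []DhcpEntry{
			{Name: "minikube", IPAddress: "192.169.0.18", HWAddress: "b2:15:5f:e8:63:75"},
		}
		fmt.Println(ipForMAC(leases, "a6:5c:ce:7e:b3:aa")) // prints "" -> no lease yet
	}
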
	I0819 11:23:31.090339    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 11:23:31.099050    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/force-systemd-env-102000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 11:23:31.099861    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:23:31.099912    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:23:31.099927    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:23:31.099942    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:23:31.478262    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 11:23:31.478277    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 11:23:31.592899    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 11:23:31.592917    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 11:23:31.592942    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 11:23:31.592962    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 11:23:31.593791    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 11:23:31.593803    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 11:23:33.085282    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 1
	I0819 11:23:33.085298    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:33.085373    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:33.086208    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:33.086269    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:33.086281    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:33.086295    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:33.086307    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:33.086315    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:33.086322    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:33.086345    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:33.086354    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:33.086374    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:33.086383    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:33.086390    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:33.086400    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:33.086410    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:33.086418    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:33.086425    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:33.086436    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:33.086444    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:33.086453    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:35.086087    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 2
	I0819 11:23:35.086101    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:35.086159    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:35.086974    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:35.087016    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:35.087024    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:35.087035    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:35.087043    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:35.087049    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:35.087055    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:35.087078    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:35.087103    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:35.087120    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:35.087127    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:35.087135    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:35.087141    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:35.087154    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:35.087167    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:35.087181    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:35.087193    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:35.087222    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:35.087234    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:36.974659    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 11:23:36.974804    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 11:23:36.974813    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 11:23:36.994598    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | 2024/08/19 11:23:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 11:23:37.087105    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 3
	I0819 11:23:37.087135    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:37.087266    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:37.088726    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:37.088879    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:37.088899    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:37.088922    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:37.088934    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:37.088986    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:37.088999    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:37.089014    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:37.089023    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:37.089040    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:37.089054    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:37.089078    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:37.089101    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:37.089122    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:37.089139    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:37.089151    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:37.089161    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:37.089177    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:37.089191    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:39.089166    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 4
	I0819 11:23:39.089181    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:39.089260    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:39.090058    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:39.090108    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:39.090118    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:39.090127    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:39.090136    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:39.090146    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:39.090155    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:39.090165    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:39.090175    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:39.090183    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:39.090191    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:39.090198    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:39.090206    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:39.090212    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:39.090225    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:39.090235    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:39.090247    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:39.090269    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:39.090283    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:41.091062    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 5
	I0819 11:23:41.091077    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:41.091130    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:41.091974    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:41.092014    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:41.092026    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:41.092036    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:41.092044    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:41.092051    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:41.092058    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:41.092074    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:41.092090    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:41.092098    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:41.092107    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:41.092118    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:41.092128    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:41.092138    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:41.092146    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:41.092153    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:41.092160    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:41.092175    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:41.092187    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:43.092869    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 6
	I0819 11:23:43.092881    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:43.092941    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:43.093801    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:43.093863    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:43.093877    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:43.093885    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:43.093890    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:43.093922    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:43.093932    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:43.093938    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:43.093949    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:43.093957    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:43.093965    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:43.093972    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:43.093978    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:43.093984    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:43.093991    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:43.093999    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:43.094006    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:43.094021    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:43.094030    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:45.095610    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 7
	I0819 11:23:45.095626    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:45.095685    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:45.096559    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:45.096601    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:45.096611    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:45.096630    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:45.096639    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:45.096647    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:45.096661    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:45.096674    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:45.096682    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:45.096690    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:45.096697    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:45.096703    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:45.096717    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:45.096729    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:45.096740    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:45.096747    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:45.096754    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:45.096763    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:45.096778    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:47.098428    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 8
	I0819 11:23:47.098441    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:47.098483    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:47.099317    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:47.099358    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:47.099371    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:47.099380    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:47.099390    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:47.099398    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:47.099406    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:47.099413    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:47.099421    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:47.099436    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:47.099448    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:47.099457    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:47.099463    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:47.099472    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:47.099480    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:47.099487    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:47.099494    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:47.099500    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:47.099508    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:49.100992    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 9
	I0819 11:23:49.101008    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:49.101056    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:49.101936    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:49.101962    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:49.101974    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:49.101987    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:49.101994    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:49.102006    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:49.102015    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:49.102022    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:49.102028    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:49.102035    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:49.102042    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:49.102049    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:49.102057    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:49.102073    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:49.102086    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:49.102094    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:49.102102    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:49.102109    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:49.102117    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:51.102421    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 10
	I0819 11:23:51.102434    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:51.102506    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:51.103299    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:51.103347    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:51.103355    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:51.103363    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:51.103370    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:51.103377    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:51.103384    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:51.103391    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:51.103406    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:51.103414    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:51.103421    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:51.103427    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:51.103434    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:51.103443    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:51.103464    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:51.103477    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:51.103485    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:51.103494    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:51.103506    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:53.103986    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 11
	I0819 11:23:53.103998    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:53.104068    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:53.104938    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:53.104993    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:53.105012    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:53.105032    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:53.105044    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:53.105052    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:53.105061    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:53.105068    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:53.105074    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:53.105089    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:53.105108    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:53.105122    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:53.105135    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:53.105143    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:53.105151    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:53.105158    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:53.105164    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:53.105170    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:53.105179    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:55.106934    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 12
	I0819 11:23:55.106950    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:55.107005    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:55.107803    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:55.107851    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:55.107863    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:55.107872    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:55.107878    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:55.107885    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:55.107901    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:55.107914    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:55.107925    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:55.107935    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:55.107946    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:55.107956    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:55.107963    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:55.107971    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:55.107981    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:55.107989    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:55.107996    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:55.108002    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:55.108010    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:57.109865    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 13
	I0819 11:23:57.109880    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:57.109952    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:57.110726    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:57.110775    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:57.110786    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:57.110794    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:57.110801    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:57.110815    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:57.110824    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:57.110830    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:57.110840    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:57.110851    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:57.110860    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:57.110867    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:57.110875    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:57.110881    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:57.110895    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:57.110902    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:57.110910    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:57.110917    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:57.110923    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:23:59.112781    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 14
	I0819 11:23:59.112795    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:23:59.112866    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:23:59.113654    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:23:59.113708    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:23:59.113720    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:23:59.113740    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:23:59.113749    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:23:59.113756    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:23:59.113765    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:23:59.113772    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:23:59.113779    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:23:59.113800    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:23:59.113811    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:23:59.113819    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:23:59.113827    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:23:59.113834    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:23:59.113841    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:23:59.113850    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:23:59.113857    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:23:59.113865    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:23:59.113872    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:01.114450    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 15
	I0819 11:24:01.114466    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:01.114524    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:01.115347    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:01.115401    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:01.115411    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:01.115424    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:01.115432    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:01.115446    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:01.115459    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:01.115467    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:01.115475    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:01.115498    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:01.115512    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:01.115522    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:01.115531    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:01.115538    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:01.115547    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:01.115561    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:01.115577    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:01.115589    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:01.115596    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:03.116185    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 16
	I0819 11:24:03.116201    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:03.116262    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:03.117064    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:03.117113    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:03.117135    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:03.117147    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:03.117157    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:03.117164    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:03.117171    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:03.117186    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:03.117198    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:03.117210    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:03.117218    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:03.117226    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:03.117233    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:03.117241    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:03.117258    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:03.117265    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:03.117273    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:03.117280    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:03.117287    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:05.118674    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 17
	I0819 11:24:05.118689    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:05.118739    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:05.119544    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:05.119591    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:05.119600    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:05.119612    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:05.119619    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:05.119626    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:05.119632    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:05.119638    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:05.119645    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:05.119652    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:05.119659    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:05.119674    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:05.119683    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:05.119692    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:05.119715    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:05.119728    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:05.119744    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:05.119756    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:05.119765    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:07.120001    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 18
	I0819 11:24:07.120014    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:07.120073    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:07.120921    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:07.120973    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:07.120985    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:07.120995    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:07.121001    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:07.121014    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:07.121024    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:07.121040    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:07.121052    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:07.121061    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:07.121071    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:07.121078    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:07.121086    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:07.121093    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:07.121100    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:07.121134    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:07.121148    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:07.121156    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:07.121175    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:09.123050    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 19
	I0819 11:24:09.123062    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:09.123131    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:09.123952    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:09.123974    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:09.123991    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:09.124005    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:09.124015    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:09.124024    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:09.124032    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:09.124039    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:09.124046    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:09.124055    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:09.124065    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:09.124072    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:09.124080    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:09.124088    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:09.124095    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:09.124102    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:09.124110    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:09.124125    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:09.124136    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:11.126026    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 20
	I0819 11:24:11.126043    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:11.126083    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:11.126980    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:11.127016    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:11.127024    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:11.127037    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:11.127046    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:11.127053    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:11.127060    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:11.127067    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:11.127073    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:11.127086    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:11.127095    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:11.127102    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:11.127108    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:11.127114    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:11.127122    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:11.127129    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:11.127136    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:11.127142    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:11.127150    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:13.129147    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 21
	I0819 11:24:13.129161    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:13.129251    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:13.130333    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:13.130380    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:13.130393    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:13.130410    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:13.130434    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:13.130450    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:13.130464    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:13.130478    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:13.130487    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:13.130498    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:13.130506    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:13.130514    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:13.130523    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:13.130532    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:13.130538    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:13.130545    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:13.130553    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:13.130567    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:13.130578    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:15.132502    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 22
	I0819 11:24:15.132517    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:15.132567    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:15.133366    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:15.133428    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:15.133438    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:15.133451    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:15.133461    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:15.133469    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:15.133477    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:15.133484    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:15.133493    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:15.133507    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:15.133523    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:15.133532    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:15.133541    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:15.133548    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:15.133556    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:15.133562    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:15.133571    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:15.133579    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:15.133592    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:17.135109    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 23
	I0819 11:24:17.135124    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:17.135159    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:17.136168    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:17.136201    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:17.136210    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:17.136227    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:17.136235    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:17.136241    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:17.136249    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:17.136255    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:17.136262    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:17.136269    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:17.136277    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:17.136283    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:17.136296    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:17.136310    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:17.136325    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:17.136337    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:17.136355    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:17.136368    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:17.136378    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:19.138358    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 24
	I0819 11:24:19.138373    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:19.138417    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:19.139252    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:19.139317    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:19.139332    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:19.139342    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:19.139349    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:19.139359    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:19.139370    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:19.139378    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:19.139386    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:19.139401    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:19.139416    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:19.139423    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:19.139430    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:19.139445    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:19.139458    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:19.139469    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:19.139478    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:19.139485    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:19.139496    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:21.141473    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 25
	I0819 11:24:21.141488    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:21.141533    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:21.142319    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:21.142376    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:21.142388    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:21.142397    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:21.142403    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:21.142410    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:21.142416    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:21.142429    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:21.142442    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:21.142470    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:21.142482    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:21.142495    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:21.142504    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:21.142511    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:21.142518    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:21.142524    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:21.142553    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:21.142565    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:21.142575    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:23.143021    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 26
	I0819 11:24:23.143033    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:23.143091    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:23.143874    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:23.143935    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:23.143947    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:23.143962    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:23.143972    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:23.143986    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:23.143995    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:23.144002    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:23.144016    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:23.144036    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:23.144050    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:23.144060    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:23.144069    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:23.144076    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:23.144084    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:23.144106    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:23.144118    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:23.144127    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:23.144133    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:25.145429    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 27
	I0819 11:24:25.145444    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:25.145490    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:25.146324    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:25.146377    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:25.146387    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:25.146394    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:25.146401    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:25.146418    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:25.146432    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:25.146441    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:25.146450    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:25.146457    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:25.146465    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:25.146472    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:25.146478    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:25.146484    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:25.146496    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:25.146511    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:25.146519    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:25.146534    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:25.146544    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:27.148516    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 28
	I0819 11:24:27.148530    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:27.148576    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:27.149386    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:27.149438    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:27.149449    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:27.149460    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:27.149468    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:27.149477    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:27.149494    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:27.149505    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:27.149512    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:27.149524    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:27.149532    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:27.149541    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:27.149552    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:27.149559    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:27.149569    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:27.149575    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:27.149582    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:27.149588    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:27.149597    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:29.151600    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Attempt 29
	I0819 11:24:29.151615    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:24:29.151662    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | hyperkit pid from json: 8771
	I0819 11:24:29.152493    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Searching for a6:5c:ce:7e:b3:aa in /var/db/dhcpd_leases ...
	I0819 11:24:29.152528    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0819 11:24:29.152535    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:b2:15:5f:e8:63:75 ID:1,b2:15:5f:e8:63:75 Lease:0x66c4de04}
	I0819 11:24:29.152551    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:4e:fd:71:16:86:c5 ID:1,4e:fd:71:16:86:c5 Lease:0x66c4dd2d}
	I0819 11:24:29.152558    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:56:71:77:7f:5a:ba ID:1,56:71:77:7f:5a:ba Lease:0x66c38b10}
	I0819 11:24:29.152572    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:6:6:7f:7b:24:3d ID:1,6:6:7f:7b:24:3d Lease:0x66c38a6e}
	I0819 11:24:29.152587    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:52:d7:99:cc:57:a9 ID:1,52:d7:99:cc:57:a9 Lease:0x66c4dc46}
	I0819 11:24:29.152601    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:32:31:13:c5:ac:dc ID:1,32:31:13:c5:ac:dc Lease:0x66c4dc0a}
	I0819 11:24:29.152617    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ca:eb:4d:55:4e:8d ID:1,ca:eb:4d:55:4e:8d Lease:0x66c4d9c3}
	I0819 11:24:29.152629    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:6:81:6b:7c:8b:5c ID:1,6:81:6b:7c:8b:5c Lease:0x66c4d99b}
	I0819 11:24:29.152638    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:de:a8:91:84:9a:51 ID:1,de:a8:91:84:9a:51 Lease:0x66c4d942}
	I0819 11:24:29.152651    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:c4:db:dc:eb:79 ID:1,72:c4:db:dc:eb:79 Lease:0x66c4d912}
	I0819 11:24:29.152659    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 11:24:29.152667    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 11:24:29.152674    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d8d7}
	I0819 11:24:29.152680    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 11:24:29.152687    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 11:24:29.152695    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 11:24:29.152704    8697 main.go:141] libmachine: (force-systemd-env-102000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 11:24:31.154728    8697 client.go:171] duration metric: took 1m0.926855203s to LocalClient.Create
	I0819 11:24:33.156771    8697 start.go:128] duration metric: took 1m2.981078041s to createHost
	I0819 11:24:33.156788    8697 start.go:83] releasing machines lock for "force-systemd-env-102000", held for 1m2.981203882s
	W0819 11:24:33.156918    8697 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-102000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a6:5c:ce:7e:b3:aa
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-102000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a6:5c:ce:7e:b3:aa
	I0819 11:24:33.199100    8697 out.go:201] 
	W0819 11:24:33.222174    8697 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a6:5c:ce:7e:b3:aa
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a6:5c:ce:7e:b3:aa
	W0819 11:24:33.222190    8697 out.go:270] * 
	* 
	W0819 11:24:33.222899    8697 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:24:33.284990    8697 out.go:201] 

** /stderr **
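The numbered attempts above are one polling loop: the hyperkit driver re-reads /var/db/dhcpd_leases looking for an entry whose hardware address matches the new VM's MAC (a6:5c:ce:7e:b3:aa), and the run fails once the retry budget is spent, which surfaces as "IP address never found in dhcp leases file". A minimal sketch of that lookup in Go, assuming macOS bootpd's usual lease layout (name=, ip_address=, hw_address=1,<mac> fields in that order); it illustrates the technique and is not the driver's actual code:

// findIP scans a bootpd-style lease file for the given MAC and returns the
// matching IP. Field names and their order are assumptions about the format,
// not taken from the hyperkit driver source.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func findIP(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// Assumes ip_address precedes hw_address within each {...} entry.
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// bootpd writes "hw_address=1,<mac>"; compare only the MAC part.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 && hw[i+1:] == mac {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIP("/var/db/dhcpd_leases", "a6:5c:ce:7e:b3:aa")
	if err != nil {
		fmt.Println(err) // the failing run above hits this branch on every attempt
		return
	}
	fmt.Println("found IP:", ip)
}

The driver wraps this lookup in a timed retry loop (the "Attempt N" lines), so the error is only raised after the whole window elapses.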
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-102000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-102000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-102000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (183.696438ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-102000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-102000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
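The follow-up check at docker_test.go:110 asks the Docker daemon inside the VM which cgroup driver it runs with; it can only fail here because the VM never got an IP. The probe itself is a one-line docker CLI call. A sketch of the same probe in Go, run against a local docker binary rather than over `minikube ssh` (an assumption made purely for illustration):

// Ask dockerd for its cgroup driver via the CLI's Go-template formatter.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	// Typical values are "cgroupfs" or "systemd"; the force-systemd tests
	// expect the latter.
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}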
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-19 11:24:33.581235 -0700 PDT m=+5577.351116337
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-102000 -n force-systemd-env-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-102000 -n force-systemd-env-102000: exit status 7 (80.222905ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 11:24:33.659507    8797 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0819 11:24:33.659529    8797 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-102000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-102000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-102000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-102000: (5.244627592s)
--- FAIL: TestForceSystemdEnv (234.27s)

TestFunctional/serial/SoftStart (194.49s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-622000 --alsologtostderr -v=8
E0819 10:03:12.761073    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-622000 --alsologtostderr -v=8: exit status 90 (1m13.656835985s)

-- stdout --
	* [functional-622000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "functional-622000" primary control-plane node in "functional-622000" cluster
	* Updating the running hyperkit "functional-622000" VM ...
	
	

-- /stdout --
** stderr ** 
	I0819 10:02:46.715279    3149 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:02:46.715467    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715473    3149 out.go:358] Setting ErrFile to fd 2...
	I0819 10:02:46.715476    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715649    3149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:02:46.717106    3149 out.go:352] Setting JSON to false
	I0819 10:02:46.739543    3149 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1936,"bootTime":1724085030,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:02:46.739637    3149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:02:46.761631    3149 out.go:177] * [functional-622000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:02:46.804362    3149 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:02:46.804421    3149 notify.go:220] Checking for updates...
	I0819 10:02:46.847125    3149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:02:46.868395    3149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:02:46.889188    3149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:02:46.931247    3149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:02:46.952016    3149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:02:46.974016    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:46.974175    3149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:02:46.974828    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:46.974917    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:46.984546    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50192
	I0819 10:02:46.984906    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:46.985340    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:46.985351    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:46.985609    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:46.985745    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.014206    3149 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:02:47.056388    3149 start.go:297] selected driver: hyperkit
	I0819 10:02:47.056417    3149 start.go:901] validating driver "hyperkit" against &{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.056645    3149 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:02:47.056829    3149 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.057043    3149 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:02:47.066748    3149 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:02:47.070635    3149 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.070656    3149 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:02:47.073332    3149 cni.go:84] Creating CNI manager for ""
	I0819 10:02:47.073357    3149 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 10:02:47.073438    3149 start.go:340] cluster config:
	{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.073535    3149 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.116046    3149 out.go:177] * Starting "functional-622000" primary control-plane node in "functional-622000" cluster
	I0819 10:02:47.137321    3149 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:02:47.137398    3149 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:02:47.137437    3149 cache.go:56] Caching tarball of preloaded images
	I0819 10:02:47.137630    3149 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:02:47.137652    3149 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:02:47.137794    3149 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/config.json ...
	I0819 10:02:47.138761    3149 start.go:360] acquireMachinesLock for functional-622000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:02:47.138881    3149 start.go:364] duration metric: took 95.93µs to acquireMachinesLock for "functional-622000"
	I0819 10:02:47.138927    3149 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:02:47.138944    3149 fix.go:54] fixHost starting: 
	I0819 10:02:47.139354    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.139383    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:47.148422    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50194
	I0819 10:02:47.148784    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:47.149127    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:47.149154    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:47.149416    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:47.149542    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.149650    3149 main.go:141] libmachine: (functional-622000) Calling .GetState
	I0819 10:02:47.149730    3149 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:02:47.149822    3149 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
	I0819 10:02:47.150790    3149 fix.go:112] recreateIfNeeded on functional-622000: state=Running err=<nil>
	W0819 10:02:47.150805    3149 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:02:47.172224    3149 out.go:177] * Updating the running hyperkit "functional-622000" VM ...
	I0819 10:02:47.193060    3149 machine.go:93] provisionDockerMachine start ...
	I0819 10:02:47.193093    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.193438    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.193671    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.193895    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194183    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194389    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.194647    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.194938    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.194949    3149 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:02:47.257006    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.257020    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257151    3149 buildroot.go:166] provisioning hostname "functional-622000"
	I0819 10:02:47.257163    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257264    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.257362    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.257459    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257534    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257627    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.257768    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.257923    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.257933    3149 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-622000 && echo "functional-622000" | sudo tee /etc/hostname
	I0819 10:02:47.330881    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.330901    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.331043    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.331162    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331251    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331340    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.331465    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.331608    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.331620    3149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-622000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-622000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-622000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:02:47.392695    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
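The shell snippet above keeps /etc/hosts idempotent: if no line already names the host, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. A rough local equivalent in Go, with the script's anchored grep simplified to a substring test, and the path and hostname as placeholder arguments:

// ensureHostsEntry mirrors the shell logic above on a local file. The
// "already mapped" check is a simplification of the script's `grep -xq`.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.Contains(l, hostname) {
			return nil // hostname already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing entry
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname) // or append a new one
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	// "hosts.txt" stands in for /etc/hosts in this sketch.
	if err := ensureHostsEntry("hosts.txt", "functional-622000"); err != nil {
		fmt.Println(err)
	}
}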
	I0819 10:02:47.392714    3149 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:02:47.392730    3149 buildroot.go:174] setting up certificates
	I0819 10:02:47.392736    3149 provision.go:84] configureAuth start
	I0819 10:02:47.392747    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.392879    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:47.392977    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.393055    3149 provision.go:143] copyHostCerts
	I0819 10:02:47.393086    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393160    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:02:47.393169    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393370    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:02:47.393581    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393621    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:02:47.393626    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393737    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:02:47.393914    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.393957    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:02:47.393962    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.394039    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:02:47.394180    3149 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.functional-622000 san=[127.0.0.1 192.169.0.4 functional-622000 localhost minikube]
	I0819 10:02:47.551861    3149 provision.go:177] copyRemoteCerts
	I0819 10:02:47.551924    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:02:47.551939    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.552077    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.552163    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.552249    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.552354    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.590340    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:02:47.590426    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:02:47.611171    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:02:47.611243    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 10:02:47.631670    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:02:47.631735    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:02:47.651195    3149 provision.go:87] duration metric: took 258.447258ms to configureAuth
	I0819 10:02:47.651207    3149 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:02:47.651340    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:47.651354    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.651503    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.651612    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.651695    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651787    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651883    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.652007    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.652132    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.652140    3149 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:02:47.713196    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:02:47.713207    3149 buildroot.go:70] root file system type: tmpfs
	I0819 10:02:47.713274    3149 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:02:47.713289    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.713416    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.713502    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713589    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713668    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.713818    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.713957    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.714002    3149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:02:47.788841    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:02:47.788868    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.789014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.789110    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789218    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789323    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.789459    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.789600    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.789615    3149 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:02:47.859208    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:02:47.859221    3149 machine.go:96] duration metric: took 666.140503ms to provisionDockerMachine
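For context: the provisioning step above uses a write-then-diff-then-swap pattern, so the Docker unit is only replaced (and the daemon only restarted) when the rendered file actually differs from what is installed. A minimal shell sketch of the same pattern, with the paths taken from the log and the unit body elided:

    sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
    [Unit]
    ...the rendered unit shown above...
    EOF
    # diff exits non-zero when the files differ, so the || branch runs exactly then
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }

Here the diff found no difference, so the swap-and-restart branch did not fire and provisioning completed in about 666ms.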
	I0819 10:02:47.859235    3149 start.go:293] postStartSetup for "functional-622000" (driver="hyperkit")
	I0819 10:02:47.859243    3149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:02:47.859253    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.859433    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:02:47.859447    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.859550    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.859628    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.859723    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.859805    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.897960    3149 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:02:47.900903    3149 command_runner.go:130] > NAME=Buildroot
	I0819 10:02:47.900911    3149 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 10:02:47.900915    3149 command_runner.go:130] > ID=buildroot
	I0819 10:02:47.900919    3149 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 10:02:47.900923    3149 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 10:02:47.901013    3149 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:02:47.901024    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:02:47.901125    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:02:47.901317    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:02:47.901324    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:02:47.901516    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> hosts in /etc/test/nested/copy/2174
	I0819 10:02:47.901521    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> /etc/test/nested/copy/2174/hosts
	I0819 10:02:47.901573    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2174
	I0819 10:02:47.908902    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:02:47.928770    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts --> /etc/test/nested/copy/2174/hosts (40 bytes)
	I0819 10:02:47.949590    3149 start.go:296] duration metric: took 90.345683ms for postStartSetup
	I0819 10:02:47.949608    3149 fix.go:56] duration metric: took 810.670757ms for fixHost
	I0819 10:02:47.949626    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.949765    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.949853    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.949932    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.950014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.950145    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.950278    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.950285    3149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:02:48.015962    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724086968.201080300
	
	I0819 10:02:48.015973    3149 fix.go:216] guest clock: 1724086968.201080300
	I0819 10:02:48.015979    3149 fix.go:229] Guest: 2024-08-19 10:02:48.2010803 -0700 PDT Remote: 2024-08-19 10:02:47.949616 -0700 PDT m=+1.269337789 (delta=251.4643ms)
	I0819 10:02:48.015999    3149 fix.go:200] guest clock delta is within tolerance: 251.4643ms
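The clock check above runs "date +%s.%N" in the guest and compares it against the host clock; the guest is only resynchronized when the delta exceeds a tolerance. A hedged shell equivalent (the address and key path come from the log; the one-second tolerance is an assumption, not taken from this run):

    guest=$(ssh -i /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa docker@192.169.0.4 'date +%s.%N')
    host=$(date +%s.%N)
    # print the absolute host/guest delta; fail if it exceeds the (assumed) 1s tolerance
    awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.4fs\n", d; exit !(d <= 1.0) }'

The log reports a delta of 251.4643ms, well inside tolerance, so no resync was needed.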
	I0819 10:02:48.016003    3149 start.go:83] releasing machines lock for "functional-622000", held for 877.108871ms
	I0819 10:02:48.016022    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016177    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:48.016275    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016589    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016695    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016767    3149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:02:48.016795    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016806    3149 ssh_runner.go:195] Run: cat /version.json
	I0819 10:02:48.016817    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016882    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.016971    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.016990    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.017080    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017101    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.017164    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.017193    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017328    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.049603    3149 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 10:02:48.049829    3149 ssh_runner.go:195] Run: systemctl --version
	I0819 10:02:48.095984    3149 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 10:02:48.096931    3149 command_runner.go:130] > systemd 252 (252)
	I0819 10:02:48.096961    3149 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
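Two probes are interleaved in the lines above: "cat /version.json" confirms which ISO and kicbase build the guest is running, and the curl call checks registry egress. registry.k8s.io answers / with an HTTP redirect, so the "Temporary Redirect" body is the healthy response:

    curl -sS -m 2 https://registry.k8s.io/
    # expected on a machine with working egress:
    # <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.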
	I0819 10:02:48.097053    3149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:02:48.102122    3149 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 10:02:48.102143    3149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:02:48.102177    3149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:02:48.110952    3149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
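The find invocation above is logged with its shell quoting stripped. With quoting restored, it renames any bridge or podman CNI configs to *.mk_disabled so they stop being picked up, roughly:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c "sudo mv {} {}.mk_disabled" \;

Here nothing matched, so there was nothing to disable.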
	I0819 10:02:48.110963    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.111059    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.126457    3149 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0819 10:02:48.126734    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:02:48.135958    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:02:48.145231    3149 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.145276    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:02:48.154341    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.163160    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:02:48.171882    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.181115    3149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:02:48.190524    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:02:48.200851    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:02:48.209942    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:02:48.219031    3149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:02:48.227175    3149 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 10:02:48.227346    3149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
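These two steps verify the standard Kubernetes networking prerequisites: bridged traffic must traverse iptables, and IPv4 forwarding must be on. Checked by hand they would look like:

    sysctl net.bridge.bridge-nf-call-iptables            # expect: net.bridge.bridge-nf-call-iptables = 1
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'  # what the log runs
    # an equivalent one-liner would be: sudo sysctl -w net.ipv4.ip_forward=1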
	I0819 10:02:48.235625    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.388843    3149 ssh_runner.go:195] Run: sudo systemctl restart containerd
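Taken together, the sed edits above amount to the following fragment of /etc/containerd/config.toml. This is a reconstruction from the logged substitutions (cgroupfs driver, runc v2, pause:3.10 sandbox image, standard CNI conf dir), not a dump of the actual file:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"

containerd is restarted here even though Docker is the target runtime; it is stopped again a few lines below once the cgroup driver has been detected.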
	I0819 10:02:48.408053    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.408141    3149 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:02:48.422240    3149 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0819 10:02:48.422854    3149 command_runner.go:130] > [Unit]
	I0819 10:02:48.422864    3149 command_runner.go:130] > Description=Docker Application Container Engine
	I0819 10:02:48.422868    3149 command_runner.go:130] > Documentation=https://docs.docker.com
	I0819 10:02:48.422873    3149 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0819 10:02:48.422878    3149 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0819 10:02:48.422882    3149 command_runner.go:130] > StartLimitBurst=3
	I0819 10:02:48.422886    3149 command_runner.go:130] > StartLimitIntervalSec=60
	I0819 10:02:48.422890    3149 command_runner.go:130] > [Service]
	I0819 10:02:48.422896    3149 command_runner.go:130] > Type=notify
	I0819 10:02:48.422900    3149 command_runner.go:130] > Restart=on-failure
	I0819 10:02:48.422906    3149 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0819 10:02:48.422914    3149 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0819 10:02:48.422920    3149 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0819 10:02:48.422926    3149 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0819 10:02:48.422932    3149 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0819 10:02:48.422942    3149 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0819 10:02:48.422948    3149 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0819 10:02:48.422956    3149 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0819 10:02:48.422962    3149 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0819 10:02:48.422966    3149 command_runner.go:130] > ExecStart=
	I0819 10:02:48.422983    3149 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0819 10:02:48.422987    3149 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0819 10:02:48.422994    3149 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0819 10:02:48.423000    3149 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0819 10:02:48.423003    3149 command_runner.go:130] > LimitNOFILE=infinity
	I0819 10:02:48.423011    3149 command_runner.go:130] > LimitNPROC=infinity
	I0819 10:02:48.423015    3149 command_runner.go:130] > LimitCORE=infinity
	I0819 10:02:48.423019    3149 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0819 10:02:48.423024    3149 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0819 10:02:48.423027    3149 command_runner.go:130] > TasksMax=infinity
	I0819 10:02:48.423030    3149 command_runner.go:130] > TimeoutStartSec=0
	I0819 10:02:48.423035    3149 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0819 10:02:48.423039    3149 command_runner.go:130] > Delegate=yes
	I0819 10:02:48.423043    3149 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0819 10:02:48.423047    3149 command_runner.go:130] > KillMode=process
	I0819 10:02:48.423050    3149 command_runner.go:130] > [Install]
	I0819 10:02:48.423059    3149 command_runner.go:130] > WantedBy=multi-user.target
	I0819 10:02:48.423191    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.438160    3149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:02:48.458938    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.471298    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:02:48.481842    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.498207    3149 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
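/etc/crictl.yaml tells crictl which CRI socket to talk to. It was first pointed at containerd above and is now rewritten for the Docker runtime, leaving:

    runtime-endpoint: unix:///var/run/cri-dockerd.sock

With that in place, commands such as "sudo crictl info" or "sudo crictl ps" go through cri-dockerd rather than containerd.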
	I0819 10:02:48.498560    3149 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:02:48.501580    3149 command_runner.go:130] > /usr/bin/cri-dockerd
	I0819 10:02:48.501729    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:02:48.508831    3149 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
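The 190-byte drop-in written here is not shown in the log. In minikube it overrides cri-dockerd's ExecStart to wire up CNI; something along these lines, offered as an assumption rather than the file's actual contents:

    [Service]
    # assumed drop-in body; the real 10-cni.conf is written from memory and never echoed
    ExecStart=
    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --hairpin-mode=hairpin-veth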
	I0819 10:02:48.522701    3149 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:02:48.665555    3149 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:02:48.815200    3149 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.815277    3149 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
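Likewise, the 130-byte daemon.json is written from memory and not echoed. For the "cgroupfs" driver announced on the previous line, minikube's daemon.json is typically of this shape (an assumption here, not a capture from this run):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }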
	I0819 10:02:48.832404    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.960435    3149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:04:00.136198    3149 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0819 10:04:00.136213    3149 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0819 10:04:00.136223    3149 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.17566847s)
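This is the failure at the heart of the test: the Docker restart hangs for over a minute and then the control process exits non-zero. The triage the error message itself points to is:

    systemctl status docker.service      # current unit state plus the last few log lines
    journalctl -xeu docker.service       # the unit's journal with explanatory context

which is effectively what the harness does next with "sudo journalctl --no-pager -u docker", whose output follows.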
	I0819 10:04:00.136284    3149 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:04:00.148256    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.148298    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	I0819 10:04:00.148306    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.148320    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	I0819 10:04:00.148330    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.148340    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.148351    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148359    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.148370    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148381    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148392    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148418    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148438    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148455    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148466    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148479    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148490    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148504    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148513    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148540    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148550    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.148560    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.148568    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.148578    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.148586    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.148595    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.148604    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.148612    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.148621    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.148630    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148638    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148647    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.148656    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.148672    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148682    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148691    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148700    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148709    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148720    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148729    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148811    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148823    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148831    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148840    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148849    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148858    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148867    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148876    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148884    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148893    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148902    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148911    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148920    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148928    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148942    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.148951    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148959    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148968    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148977    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148989    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148999    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149127    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.149138    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149151    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.149159    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.149168    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.149175    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.149183    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.149191    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	I0819 10:04:00.149204    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.149212    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	I0819 10:04:00.149230    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.149242    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	I0819 10:04:00.149252    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.149259    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.149267    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.149273    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.149280    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.149286    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.149293    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.149302    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.149310    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.149320    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0819 10:04:00.149325    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.149331    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.149336    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.149341    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.149347    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	I0819 10:04:00.149464    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.149477    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	I0819 10:04:00.149486    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.149496    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.149505    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149514    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.149523    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149534    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149546    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149561    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149571    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149582    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149592    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149601    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149610    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149624    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149633    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149656    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149665    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.149674    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.149682    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.149690    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.149699    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.149713    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.149723    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.149732    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.149740    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.149753    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149761    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149771    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.149780    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.149790    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149799    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149808    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149817    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149830    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149840    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149848    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149857    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149938    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149950    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149959    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149968    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149976    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149986    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149994    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150003    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150018    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150027    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150035    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150044    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150053    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150061    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.150074    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150083    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150092    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150101    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150113    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150122    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150133    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.150146    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150264    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.150275    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.150284    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.150292    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.150300    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.150307    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	I0819 10:04:00.150315    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.150322    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	I0819 10:04:00.150338    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.150349    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.150362    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	I0819 10:04:00.150374    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.150382    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.150389    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.150396    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.150402    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.150412    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.150420    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.150429    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.150437    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.150446    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.150455    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.150461    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.150467    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.150473    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.150480    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	I0819 10:04:00.150599    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.150613    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	I0819 10:04:00.150627    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.150637    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.150645    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150655    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.150664    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150675    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150684    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150700    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150710    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150721    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150730    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150739    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150748    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150763    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150772    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150786    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150795    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.150805    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.150813    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.150821    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.150830    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.150839    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.150849    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.150858    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.150866    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.150875    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150883    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150892    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.150902    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.150912    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150921    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150930    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150939    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150948    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150958    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150969    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150982    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.151044    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151058    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151067    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151076    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151085    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151095    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151104    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151113    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151122    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151130    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151139    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151148    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151156    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151164    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.151173    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151182    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151191    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151200    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151215    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151225    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151238    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.151350    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151361    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.151380    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.151390    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.151398    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.151406    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.151414    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	I0819 10:04:00.151422    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.151429    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	I0819 10:04:00.151445    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.151456    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.151464    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	I0819 10:04:00.151474    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.151483    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.151496    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.151504    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.151510    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.151519    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151531    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151541    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151551    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151563    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151616    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151631    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151641    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151653    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151663    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151673    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151683    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151693    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151704    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151714    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151724    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151734    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151744    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151754    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151767    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151777    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151787    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151805    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151815    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151865    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151878    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151887    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151896    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151908    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151918    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151928    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151938    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151948    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151962    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151972    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151982    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151993    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152004    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152013    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152023    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152035    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152045    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152055    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152134    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152147    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152158    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152170    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152180    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152190    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152200    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152214    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152224    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152234    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152244    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152254    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152263    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152273    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152283    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152297    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152307    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152317    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152327    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152413    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152425    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152439    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152449    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152459    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152467    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152479    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152493    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152503    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152514    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152521    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.152532    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.152543    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152555    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152567    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152575    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152590    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152599    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152609    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152617    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152761    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152775    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152785    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152800    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152808    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152817    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152828    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152836    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152847    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152858    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152866    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152876    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152883    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152895    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152904    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152912    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152925    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152937    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152946    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152957    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152965    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152974    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152984    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152996    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153006    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153016    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153024    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153042    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153053    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153069    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153079    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153093    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153106    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153116    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153126    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153134    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153147    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153159    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153175    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153190    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153200    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153213    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153227    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153243    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153256    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153269    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153279    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153290    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153297    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153312    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	I0819 10:04:00.153323    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153334    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153345    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153361    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153372    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.153379    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.153414    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.153426    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.153433    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.153439    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.153445    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	I0819 10:04:00.153454    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.153461    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	I0819 10:04:00.153471    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0819 10:04:00.153480    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0819 10:04:00.153486    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0819 10:04:00.153492    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	I0819 10:04:00.188229    3149 out.go:201] 
	W0819 10:04:00.209936    3149 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0819 10:04:00.210429    3149 out.go:270] * 
	W0819 10:04:00.211654    3149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:04:00.274709    3149 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-amd64 start -p functional-622000 --alsologtostderr -v=8": exit status 90
functional_test.go:663: soft start took 1m13.717891609s for "functional-622000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-622000 -n functional-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-622000 -n functional-622000: exit status 2 (154.793429ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 logs -n 25
E0819 10:05:28.884131    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:05:56.603982    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 logs -n 25: (2m0.411620033s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | disable nvidia-device-plugin                                             | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:57 PDT | 19 Aug 24 09:57 PDT |
	|         | -p addons-080000                                                         |                   |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                 | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:57 PDT | 19 Aug 24 09:57 PDT |
	|         | addons-080000                                                            |                   |         |         |                     |                     |
	| addons  | enable headlamp                                                          | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:57 PDT | 19 Aug 24 09:57 PDT |
	|         | -p addons-080000                                                         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| addons  | addons-080000 addons disable                                             | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | headlamp --alsologtostderr                                               |                   |         |         |                     |                     |
	|         | -v=1                                                                     |                   |         |         |                     |                     |
	| stop    | -p addons-080000                                                         | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	| addons  | enable dashboard -p                                                      | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | addons-080000                                                            |                   |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | addons-080000                                                            |                   |         |         |                     |                     |
	| addons  | disable gvisor -p                                                        | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | addons-080000                                                            |                   |         |         |                     |                     |
	| delete  | -p addons-080000                                                         | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	| start   | -p nospam-492000 -n=1 --memory=2250 --wait=false                         | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | --driver=hyperkit                                                        |                   |         |         |                     |                     |
	| start   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT |                     |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | start --dry-run                                                          |                   |         |         |                     |                     |
	| start   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT |                     |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | start --dry-run                                                          |                   |         |         |                     |                     |
	| start   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT |                     |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | start --dry-run                                                          |                   |         |         |                     |                     |
	| pause   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | pause                                                                    |                   |         |         |                     |                     |
	| pause   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | pause                                                                    |                   |         |         |                     |                     |
	| pause   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | pause                                                                    |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | unpause                                                                  |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | unpause                                                                  |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | unpause                                                                  |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | stop                                                                     |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 10:00 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | stop                                                                     |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 10:00 PDT | 19 Aug 24 10:01 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | stop                                                                     |                   |         |         |                     |                     |
	| delete  | -p nospam-492000                                                         | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 10:01 PDT | 19 Aug 24 10:01 PDT |
	| start   | -p functional-622000                                                     | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:01 PDT | 19 Aug 24 10:02 PDT |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit                                             |                   |         |         |                     |                     |
	| start   | -p functional-622000                                                     | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:02 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:02:46
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:02:46.715279    3149 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:02:46.715467    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715473    3149 out.go:358] Setting ErrFile to fd 2...
	I0819 10:02:46.715476    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715649    3149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:02:46.717106    3149 out.go:352] Setting JSON to false
	I0819 10:02:46.739543    3149 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1936,"bootTime":1724085030,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:02:46.739637    3149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:02:46.761631    3149 out.go:177] * [functional-622000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:02:46.804362    3149 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:02:46.804421    3149 notify.go:220] Checking for updates...
	I0819 10:02:46.847125    3149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:02:46.868395    3149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:02:46.889188    3149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:02:46.931247    3149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:02:46.952016    3149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:02:46.974016    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:46.974175    3149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:02:46.974828    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:46.974917    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:46.984546    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50192
	I0819 10:02:46.984906    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:46.985340    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:46.985351    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:46.985609    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:46.985745    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.014206    3149 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:02:47.056388    3149 start.go:297] selected driver: hyperkit
	I0819 10:02:47.056417    3149 start.go:901] validating driver "hyperkit" against &{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.056645    3149 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:02:47.056829    3149 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.057043    3149 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:02:47.066748    3149 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:02:47.070635    3149 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.070656    3149 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:02:47.073332    3149 cni.go:84] Creating CNI manager for ""
	I0819 10:02:47.073357    3149 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 10:02:47.073438    3149 start.go:340] cluster config:
	{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.073535    3149 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.116046    3149 out.go:177] * Starting "functional-622000" primary control-plane node in "functional-622000" cluster
	I0819 10:02:47.137321    3149 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:02:47.137398    3149 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:02:47.137437    3149 cache.go:56] Caching tarball of preloaded images
	I0819 10:02:47.137630    3149 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:02:47.137652    3149 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:02:47.137794    3149 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/config.json ...
	I0819 10:02:47.138761    3149 start.go:360] acquireMachinesLock for functional-622000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:02:47.138881    3149 start.go:364] duration metric: took 95.93µs to acquireMachinesLock for "functional-622000"
	I0819 10:02:47.138927    3149 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:02:47.138944    3149 fix.go:54] fixHost starting: 
	I0819 10:02:47.139354    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.139383    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:47.148422    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50194
	I0819 10:02:47.148784    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:47.149127    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:47.149154    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:47.149416    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:47.149542    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.149650    3149 main.go:141] libmachine: (functional-622000) Calling .GetState
	I0819 10:02:47.149730    3149 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:02:47.149822    3149 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
	I0819 10:02:47.150790    3149 fix.go:112] recreateIfNeeded on functional-622000: state=Running err=<nil>
	W0819 10:02:47.150805    3149 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:02:47.172224    3149 out.go:177] * Updating the running hyperkit "functional-622000" VM ...
	I0819 10:02:47.193060    3149 machine.go:93] provisionDockerMachine start ...
	I0819 10:02:47.193093    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.193438    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.193671    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.193895    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194183    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194389    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.194647    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.194938    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.194949    3149 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:02:47.257006    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.257020    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257151    3149 buildroot.go:166] provisioning hostname "functional-622000"
	I0819 10:02:47.257163    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257264    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.257362    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.257459    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257534    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257627    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.257768    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.257923    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.257933    3149 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-622000 && echo "functional-622000" | sudo tee /etc/hostname
	I0819 10:02:47.330881    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.330901    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.331043    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.331162    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331251    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331340    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.331465    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.331608    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.331620    3149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-622000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-622000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-622000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:02:47.392695    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:02:47.392714    3149 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:02:47.392730    3149 buildroot.go:174] setting up certificates
	I0819 10:02:47.392736    3149 provision.go:84] configureAuth start
	I0819 10:02:47.392747    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.392879    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:47.392977    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.393055    3149 provision.go:143] copyHostCerts
	I0819 10:02:47.393086    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393160    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:02:47.393169    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393370    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:02:47.393581    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393621    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:02:47.393626    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393737    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:02:47.393914    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.393957    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:02:47.393962    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.394039    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:02:47.394180    3149 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.functional-622000 san=[127.0.0.1 192.169.0.4 functional-622000 localhost minikube]
	I0819 10:02:47.551861    3149 provision.go:177] copyRemoteCerts
	I0819 10:02:47.551924    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:02:47.551939    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.552077    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.552163    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.552249    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.552354    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.590340    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:02:47.590426    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:02:47.611171    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:02:47.611243    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 10:02:47.631670    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:02:47.631735    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:02:47.651195    3149 provision.go:87] duration metric: took 258.447258ms to configureAuth
	I0819 10:02:47.651207    3149 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:02:47.651340    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:47.651354    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.651503    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.651612    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.651695    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651787    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651883    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.652007    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.652132    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.652140    3149 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:02:47.713196    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:02:47.713207    3149 buildroot.go:70] root file system type: tmpfs
	I0819 10:02:47.713274    3149 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:02:47.713289    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.713416    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.713502    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713589    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713668    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.713818    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.713957    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.714002    3149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:02:47.788841    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:02:47.788868    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.789014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.789110    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789218    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789323    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.789459    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.789600    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.789615    3149 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:02:47.859208    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:02:47.859221    3149 machine.go:96] duration metric: took 666.140503ms to provisionDockerMachine
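The SSH command just above is an idempotent unit install: `diff -u` exits 0 when the staged docker.service.new matches the live unit, so the move/daemon-reload/enable/restart branch only runs when the rendered unit actually changed. A minimal Go sketch of the same compare-then-swap idea, with illustrative file names rather than minikube's real paths:

	// unitswap.go: install a staged unit file only when its content differs.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func replaceIfChanged(current, staged string) (bool, error) {
		old, _ := os.ReadFile(current) // a missing unit simply reads as empty
		fresh, err := os.ReadFile(staged)
		if err != nil {
			return false, err
		}
		if bytes.Equal(old, fresh) {
			return false, os.Remove(staged) // unchanged: discard the staged copy
		}
		// changed: the caller is expected to daemon-reload and restart the service
		return true, os.Rename(staged, current)
	}

	func main() {
		changed, err := replaceIfChanged("docker.service", "docker.service.new")
		fmt.Println("changed:", changed, "err:", err)
	}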
	I0819 10:02:47.859235    3149 start.go:293] postStartSetup for "functional-622000" (driver="hyperkit")
	I0819 10:02:47.859243    3149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:02:47.859253    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.859433    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:02:47.859447    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.859550    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.859628    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.859723    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.859805    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.897960    3149 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:02:47.900903    3149 command_runner.go:130] > NAME=Buildroot
	I0819 10:02:47.900911    3149 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 10:02:47.900915    3149 command_runner.go:130] > ID=buildroot
	I0819 10:02:47.900919    3149 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 10:02:47.900923    3149 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 10:02:47.901013    3149 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:02:47.901024    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:02:47.901125    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:02:47.901317    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:02:47.901324    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:02:47.901516    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> hosts in /etc/test/nested/copy/2174
	I0819 10:02:47.901521    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> /etc/test/nested/copy/2174/hosts
	I0819 10:02:47.901573    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2174
	I0819 10:02:47.908902    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:02:47.928770    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts --> /etc/test/nested/copy/2174/hosts (40 bytes)
	I0819 10:02:47.949590    3149 start.go:296] duration metric: took 90.345683ms for postStartSetup
	I0819 10:02:47.949608    3149 fix.go:56] duration metric: took 810.670757ms for fixHost
	I0819 10:02:47.949626    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.949765    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.949853    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.949932    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.950014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.950145    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.950278    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.950285    3149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:02:48.015962    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724086968.201080300
	
	I0819 10:02:48.015973    3149 fix.go:216] guest clock: 1724086968.201080300
	I0819 10:02:48.015979    3149 fix.go:229] Guest: 2024-08-19 10:02:48.2010803 -0700 PDT Remote: 2024-08-19 10:02:47.949616 -0700 PDT m=+1.269337789 (delta=251.4643ms)
	I0819 10:02:48.015999    3149 fix.go:200] guest clock delta is within tolerance: 251.4643ms
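The fix.go lines above compare the guest's `date +%s.%N` against the host clock and accept the run when the skew is small (here a delta of about 251ms). A sketch of that check follows; the one-second bound is an assumed value for illustration, not minikube's configured tolerance:

	// clockdelta.go: absolute guest/host clock skew checked against a tolerance.
	package main

	import (
		"fmt"
		"time"
	)

	func withinTolerance(guest, host time.Time, max time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta // take the absolute value of the skew
		}
		return delta <= max
	}

	func main() {
		host := time.Now()
		guest := host.Add(251 * time.Millisecond) // the delta seen in the log above
		fmt.Println(withinTolerance(guest, host, time.Second))
	}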
	I0819 10:02:48.016003    3149 start.go:83] releasing machines lock for "functional-622000", held for 877.108871ms
	I0819 10:02:48.016022    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016177    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:48.016275    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016589    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016695    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016767    3149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:02:48.016795    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016806    3149 ssh_runner.go:195] Run: cat /version.json
	I0819 10:02:48.016817    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016882    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.016971    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.016990    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.017080    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017101    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.017164    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.017193    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017328    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.049603    3149 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 10:02:48.049829    3149 ssh_runner.go:195] Run: systemctl --version
	I0819 10:02:48.095984    3149 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 10:02:48.096931    3149 command_runner.go:130] > systemd 252 (252)
	I0819 10:02:48.096961    3149 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 10:02:48.097053    3149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:02:48.102122    3149 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 10:02:48.102143    3149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:02:48.102177    3149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:02:48.110952    3149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 10:02:48.110963    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.111059    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.126457    3149 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0819 10:02:48.126734    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:02:48.135958    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:02:48.145231    3149 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.145276    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:02:48.154341    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.163160    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:02:48.171882    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.181115    3149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:02:48.190524    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:02:48.200851    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:02:48.209942    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:02:48.219031    3149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:02:48.227175    3149 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 10:02:48.227346    3149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:02:48.235625    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.388843    3149 ssh_runner.go:195] Run: sudo systemctl restart containerd
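The run of `sed -i` commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false) before the restart. The central rewrite expressed in Go, applied to a hypothetical config fragment rather than the real file:

	// cgroupfs.go: flip SystemdCgroup in a containerd config fragment,
	// mirroring the `sed -i -r 's|^( *)SystemdCgroup = .*$|...|'` edit above.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		cfg := "[plugins.\"io.containerd.grpc.v1.cri\"]\n  SystemdCgroup = true\n"
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
	}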
	I0819 10:02:48.408053    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.408141    3149 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:02:48.422240    3149 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0819 10:02:48.422854    3149 command_runner.go:130] > [Unit]
	I0819 10:02:48.422864    3149 command_runner.go:130] > Description=Docker Application Container Engine
	I0819 10:02:48.422868    3149 command_runner.go:130] > Documentation=https://docs.docker.com
	I0819 10:02:48.422873    3149 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0819 10:02:48.422878    3149 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0819 10:02:48.422882    3149 command_runner.go:130] > StartLimitBurst=3
	I0819 10:02:48.422886    3149 command_runner.go:130] > StartLimitIntervalSec=60
	I0819 10:02:48.422890    3149 command_runner.go:130] > [Service]
	I0819 10:02:48.422896    3149 command_runner.go:130] > Type=notify
	I0819 10:02:48.422900    3149 command_runner.go:130] > Restart=on-failure
	I0819 10:02:48.422906    3149 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0819 10:02:48.422914    3149 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0819 10:02:48.422920    3149 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0819 10:02:48.422926    3149 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0819 10:02:48.422932    3149 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0819 10:02:48.422942    3149 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0819 10:02:48.422948    3149 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0819 10:02:48.422956    3149 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0819 10:02:48.422962    3149 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0819 10:02:48.422966    3149 command_runner.go:130] > ExecStart=
	I0819 10:02:48.422983    3149 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0819 10:02:48.422987    3149 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0819 10:02:48.422994    3149 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0819 10:02:48.423000    3149 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0819 10:02:48.423003    3149 command_runner.go:130] > LimitNOFILE=infinity
	I0819 10:02:48.423011    3149 command_runner.go:130] > LimitNPROC=infinity
	I0819 10:02:48.423015    3149 command_runner.go:130] > LimitCORE=infinity
	I0819 10:02:48.423019    3149 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0819 10:02:48.423024    3149 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0819 10:02:48.423027    3149 command_runner.go:130] > TasksMax=infinity
	I0819 10:02:48.423030    3149 command_runner.go:130] > TimeoutStartSec=0
	I0819 10:02:48.423035    3149 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0819 10:02:48.423039    3149 command_runner.go:130] > Delegate=yes
	I0819 10:02:48.423043    3149 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0819 10:02:48.423047    3149 command_runner.go:130] > KillMode=process
	I0819 10:02:48.423050    3149 command_runner.go:130] > [Install]
	I0819 10:02:48.423059    3149 command_runner.go:130] > WantedBy=multi-user.target
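For readability, the Docker unit override echoed line-by-line above reassembles to the drop-in below (explanatory comments condensed; the path is minikube's conventional drop-in location and is an assumption, since the log never prints it, and the [Unit] header with its Description lines sits just above the captured region):

	# /etc/systemd/system/docker.service.d/10-machine.conf  (path assumed)
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60

	[Service]
	Type=notify
	Restart=on-failure
	# The base dockerd unit already defines ExecStart=; the empty ExecStart=
	# below clears it first, because multiple ExecStart= settings are only
	# legal for Type=oneshot services and systemd would otherwise refuse to
	# start the service.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
	ExecReload=/bin/kill -s HUP $MAINPID
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	TasksMax=infinity
	TimeoutStartSec=0
	Delegate=yes
	KillMode=process

	[Install]
	WantedBy=multi-user.target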
	I0819 10:02:48.423191    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.438160    3149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:02:48.458938    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.471298    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:02:48.481842    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.498207    3149 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
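The printf | sudo tee command above leaves this one-line /etc/crictl.yaml on the node, pointing the crictl CLI at the cri-dockerd socket:

	runtime-endpoint: unix:///var/run/cri-dockerd.sock

Since crictl reads /etc/crictl.yaml by default, a plain `sudo crictl info` on the node would then talk to cri-dockerd without an explicit --runtime-endpoint flag (a hedged verification step, not something this test run performs).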
	I0819 10:02:48.498560    3149 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:02:48.501580    3149 command_runner.go:130] > /usr/bin/cri-dockerd
	I0819 10:02:48.501729    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:02:48.508831    3149 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
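The 190-byte 10-cni.conf payload itself is not echoed in this log. One illustrative way to inspect the merged unit after drop-ins like this land (not a command the harness runs) is:

	sudo systemctl cat cri-docker.service

which prints the base cri-docker.service followed by every drop-in under /etc/systemd/system/cri-docker.service.d/, including the 10-cni.conf written above.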
	I0819 10:02:48.522701    3149 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:02:48.665555    3149 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:02:48.815200    3149 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.815277    3149 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
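The 130-byte daemon.json is likewise not echoed. Given the "cgroupfs" message above, a minimal sketch of such a file (hypothetical contents; only the cgroup-driver setting is implied by the log) would be:

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}

The exec-opts entry is what switches dockerd's cgroup driver to cgroupfs; the remaining keys are typical minikube defaults and are assumptions here.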
	I0819 10:02:48.832404    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.960435    3149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:04:00.136198    3149 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0819 10:04:00.136213    3149 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0819 10:04:00.136223    3149 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.17566847s)
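The restart has now blocked for 1m11s and failed. The error text's own triage suggestion applies here:

	sudo systemctl status docker.service
	sudo journalctl -xeu docker.service

minikube effectively runs the second of these next (journalctl --no-pager -u docker), and its output follows.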
	I0819 10:04:00.136284    3149 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:04:00.148256    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.148298    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	I0819 10:04:00.148306    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.148320    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	I0819 10:04:00.148330    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.148340    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.148351    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148359    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.148370    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148381    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148392    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148418    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148438    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148455    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148466    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148479    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148490    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148504    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148513    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148540    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148550    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.148560    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.148568    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.148578    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.148586    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.148595    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.148604    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.148612    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.148621    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.148630    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148638    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148647    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.148656    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.148672    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148682    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148691    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148700    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148709    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148720    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148729    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148811    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148823    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148831    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148840    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148849    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148858    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148867    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148876    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148884    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148893    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148902    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148911    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148920    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148928    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148942    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.148951    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148959    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148968    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148977    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148989    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148999    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149127    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.149138    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149151    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.149159    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.149168    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.149175    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.149183    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.149191    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	I0819 10:04:00.149204    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.149212    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	I0819 10:04:00.149230    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.149242    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	I0819 10:04:00.149252    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.149259    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.149267    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.149273    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.149280    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.149286    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.149293    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.149302    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.149310    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.149320    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0819 10:04:00.149325    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.149331    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.149336    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.149341    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.149347    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	I0819 10:04:00.149464    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.149477    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	I0819 10:04:00.149486    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.149496    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.149505    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149514    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.149523    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149534    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149546    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149561    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149571    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149582    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149592    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149601    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149610    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149624    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149633    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149656    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149665    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.149674    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.149682    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.149690    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.149699    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.149713    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.149723    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.149732    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.149740    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.149753    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149761    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149771    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.149780    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.149790    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149799    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149808    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149817    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149830    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149840    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149848    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149857    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149938    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149950    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149959    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149968    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149976    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149986    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149994    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150003    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150018    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150027    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150035    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150044    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150053    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150061    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.150074    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150083    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150092    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150101    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150113    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150122    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150133    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.150146    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150264    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.150275    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.150284    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.150292    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.150300    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.150307    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	I0819 10:04:00.150315    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.150322    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	I0819 10:04:00.150338    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.150349    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.150362    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	I0819 10:04:00.150374    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.150382    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.150389    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.150396    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.150402    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.150412    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.150420    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.150429    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.150437    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.150446    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.150455    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.150461    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.150467    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.150473    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.150480    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	I0819 10:04:00.150599    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.150613    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	I0819 10:04:00.150627    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.150637    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.150645    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150655    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.150664    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150675    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150684    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150700    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150710    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150721    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150730    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150739    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150748    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150763    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150772    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150786    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150795    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.150805    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.150813    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.150821    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.150830    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.150839    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.150849    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.150858    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.150866    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.150875    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150883    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150892    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.150902    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.150912    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150921    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150930    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150939    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150948    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150958    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150969    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150982    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.151044    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151058    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151067    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151076    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151085    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151095    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151104    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151113    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151122    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151130    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151139    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151148    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151156    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151164    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.151173    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151182    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151191    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151200    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151215    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151225    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151238    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.151350    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151361    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.151380    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.151390    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.151398    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.151406    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.151414    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	I0819 10:04:00.151422    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.151429    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	I0819 10:04:00.151445    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.151456    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.151464    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	I0819 10:04:00.151474    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.151483    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.151496    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.151504    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.151510    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.151519    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151531    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151541    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151551    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151563    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151616    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151631    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151641    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151653    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151663    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151673    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151683    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151693    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151704    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151714    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151724    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151734    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151744    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151754    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151767    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151777    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151787    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151805    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151815    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151865    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151878    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151887    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151896    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151908    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151918    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151928    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151938    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151948    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151962    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151972    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151982    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151993    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152004    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152013    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152023    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152035    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152045    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152055    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152134    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152147    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152158    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152170    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152180    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152190    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152200    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152214    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152224    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152234    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152244    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152254    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152263    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152273    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152283    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152297    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152307    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152317    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152327    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152413    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152425    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152439    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152449    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152459    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152467    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152479    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152493    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152503    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152514    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152521    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.152532    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.152543    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152555    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152567    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152575    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152590    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152599    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152609    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152617    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152761    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152775    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152785    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152800    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152808    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152817    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152828    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152836    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152847    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152858    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152866    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152876    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152883    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152895    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152904    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152912    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152925    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152937    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152946    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152957    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152965    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152974    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152984    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152996    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153006    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153016    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153024    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153042    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153053    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153069    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153079    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153093    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153106    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153116    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153126    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153134    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153147    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153159    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153175    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153190    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153200    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153213    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153227    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153243    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153256    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153269    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153279    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153290    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153297    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153312    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	I0819 10:04:00.153323    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153334    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153345    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153361    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153372    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.153379    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.153414    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.153426    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.153433    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.153439    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.153445    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	I0819 10:04:00.153454    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.153461    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	I0819 10:04:00.153471    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0819 10:04:00.153480    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0819 10:04:00.153486    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0819 10:04:00.153492    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	I0819 10:04:00.188229    3149 out.go:201] 
	W0819 10:04:00.209936    3149 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
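	# Editor's note: the "ip6tables is enabled, but cannot set up ip6tables chains"
	# warning above recurs on every dockerd start in this log. It means the guest
	# kernel (5.10.207) exposes no ip6tables `nat' table, typically because the
	# ip6table_nat module is absent; dockerd logs it and proceeds with IPv4-only
	# NAT, so the warning is benign noise here rather than the failure under test.
	# A minimal sketch to confirm from the host (assumes the functional-622000
	# profile is still running; modprobe is expected to fail if the minikube ISO
	# does not ship the module):
	#   minikube ssh -p functional-622000 -- lsmod | grep ip6table
	#   minikube ssh -p functional-622000 -- sudo modprobe ip6table_nat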
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
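	# Editor's note: the "Default bridge (docker0) is assigned with an IP address
	# 172.17.0.0/16" line above is informational; as the daemon itself suggests,
	# the subnet can be overridden via dockerd's --bip option. A hedged sketch
	# using minikube's flag pass-through (the 172.18.0.1/16 value is an arbitrary
	# illustration, not taken from this report):
	#   minikube start -p functional-622000 --docker-opt bip=172.18.0.1/16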
	Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
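	# Editor's note: three dockerd instances appear in this window (pids 522, 867
	# and 1220, starting at 17:01:44, 17:01:47 and 17:01:54), each reaching
	# "Daemon has completed initialization". Stop/start cycles like this are the
	# normal pattern while minikube provisions the runtime and rewrites its
	# configuration, not a crash loop. A sketch to count the cycles (assumes
	# journalctl access inside the guest):
	#   minikube ssh -p functional-622000 -- sudo journalctl -u docker --no-pager | grep -c "Starting up"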
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
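	# Editor's note: each four-line group above (event publisher, shutdown, task
	# and pause plugins with runtime=io.containerd.runc.v2) marks containerd
	# starting one runc shim, i.e. one container coming up; the burst at 17:02:00
	# is consistent with the control-plane pods being created. A sketch to list
	# the resulting containers (assumes the node is still up):
	#   minikube ssh -p functional-622000 -- docker ps -a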
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
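	# Editor's note: a "shim disconnected" / "cleaning up after shim disconnected"
	# pair marks one container exiting and containerd reaping its shim. The two
	# exits at 17:02:21 (75a54acd..., 2174c907...) are ordinary churn shortly
	# after pod startup; the batch beginning at 17:02:49 below follows the
	# 'terminated' signal, i.e. systemd stopping docker.service for the next
	# provisioning step. A sketch to check an exit status (assumes the container
	# record still exists; task deletion alone does not remove it):
	#   minikube ssh -p functional-622000 -- docker inspect -f '{{.State.ExitCode}}' 75a54acd5f43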
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0819 10:04:00.210429    3149 out.go:270] * 
	W0819 10:04:00.211654    3149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:04:00.274709    3149 out.go:201] 
	
	
	==> Docker <==
	Aug 19 17:04:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:04:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:04:00 functional-622000 dockerd[3731]: time="2024-08-19T17:04:00.538775657Z" level=info msg="Starting up"
	Aug 19 17:05:00 functional-622000 dockerd[3731]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:05:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:05:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:05:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="error getting RW layer size for container ID 'd997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824'"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="error getting RW layer size for container ID 'be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa'"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="error getting RW layer size for container ID 'c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90'"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="error getting RW layer size for container ID '5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547'"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="error getting RW layer size for container ID '6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5'"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="error getting RW layer size for container ID 'ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd'"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="error getting RW layer size for container ID '9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5'"
	Aug 19 17:05:00 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:05:00Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Aug 19 17:05:00 functional-622000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Aug 19 17:05:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:05:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-19T17:05:02Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.100608] systemd-fstab-generator[514]: Ignoring "noauto" option for root device
	[  +1.943533] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +0.277412] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.100828] systemd-fstab-generator[844]: Ignoring "noauto" option for root device
	[  +0.052131] kauditd_printk_skb: 117 callbacks suppressed
	[  +0.061352] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +2.454350] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.095628] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.097890] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.135254] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.642141] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.053482] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.547324] systemd-fstab-generator[1462]: Ignoring "noauto" option for root device
	[  +3.456953] systemd-fstab-generator[1592]: Ignoring "noauto" option for root device
	[  +0.049385] kauditd_printk_skb: 70 callbacks suppressed
	[Aug19 17:02] systemd-fstab-generator[1997]: Ignoring "noauto" option for root device
	[  +0.071304] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.815922] systemd-fstab-generator[2131]: Ignoring "noauto" option for root device
	[  +0.113741] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.001342] kauditd_printk_skb: 98 callbacks suppressed
	[ +26.946888] systemd-fstab-generator[3048]: Ignoring "noauto" option for root device
	[  +0.280843] systemd-fstab-generator[3084]: Ignoring "noauto" option for root device
	[  +0.156587] systemd-fstab-generator[3096]: Ignoring "noauto" option for root device
	[  +0.148300] systemd-fstab-generator[3110]: Ignoring "noauto" option for root device
	[  +5.168584] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> kernel <==
	 17:06:01 up 4 min,  0 users,  load average: 0.03, 0.13, 0.07
	Linux functional-622000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.023346    2004 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused" interval="7s"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.733627    2004 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.733929    2004 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: I0819 17:06:00.734340    2004 setters.go:600] "Node became not ready" node="functional-622000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-08-19T17:06:00Z","lastTransitionTime":"2024-08-19T17:06:00Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 3m12.241563362s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"}
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.735746    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:06:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:06:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:06:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2024-08-19T17:06:00Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 3m12.241563362s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-622000\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000/status?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.736035    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.736317    2004 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.736334    2004 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.736346    2004 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.736358    2004 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: I0819 17:06:00.736366    2004 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.736491    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.736756    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.737030    2004 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.737078    2004 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.737119    2004 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.737151    2004 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.737189    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.737228    2004 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.737538    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.737599    2004 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.737737    2004 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.738129    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.738914    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:06:00 functional-622000 kubelet[2004]: E0819 17:06:00.738979    2004 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 10:05:00.296191    3196 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:05:00.310064    3196 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:05:00.322421    3196 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:05:00.333946    3196 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:05:00.344416    3196 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:05:00.354651    3196 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:05:00.365018    3196 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:05:00.377193    3196 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-622000 -n functional-622000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-622000 -n functional-622000: exit status 2 (153.216103ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-622000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (194.49s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (120.34s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-622000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-622000 get po -A: exit status 1 (549.02867ms)

                                                
                                                
** stderr ** 
	E0819 10:06:01.328377    3260 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0819 10:06:01.430177    3260 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0819 10:06:01.531141    3260 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0819 10:06:01.632624    3260 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	E0819 10:06:01.733275    3260 memcache.go:265] couldn't get current server API group list: Get "https://192.169.0.4:8441/api?timeout=32s": dial tcp 192.169.0.4:8441: connect: connection refused
	The connection to the server 192.169.0.4:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-622000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"E0819 10:06:01.328377    3260 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused\nE0819 10:06:01.430177    3260 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused\nE0819 10:06:01.531141    3260 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused\nE0819 10:06:01.632624    3260 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused\nE0819 10:06:01.733275    3260 memcache.go:265] couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused\nThe connection to the server 192.169.0.4:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-622000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-622000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-622000 -n functional-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-622000 -n functional-622000: exit status 2 (153.26842ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 logs -n 25: (1m59.440579953s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | disable nvidia-device-plugin                                             | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:57 PDT | 19 Aug 24 09:57 PDT |
	|         | -p addons-080000                                                         |                   |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                 | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:57 PDT | 19 Aug 24 09:57 PDT |
	|         | addons-080000                                                            |                   |         |         |                     |                     |
	| addons  | enable headlamp                                                          | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:57 PDT | 19 Aug 24 09:57 PDT |
	|         | -p addons-080000                                                         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| addons  | addons-080000 addons disable                                             | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | headlamp --alsologtostderr                                               |                   |         |         |                     |                     |
	|         | -v=1                                                                     |                   |         |         |                     |                     |
	| stop    | -p addons-080000                                                         | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	| addons  | enable dashboard -p                                                      | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | addons-080000                                                            |                   |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | addons-080000                                                            |                   |         |         |                     |                     |
	| addons  | disable gvisor -p                                                        | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | addons-080000                                                            |                   |         |         |                     |                     |
	| delete  | -p addons-080000                                                         | addons-080000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	| start   | -p nospam-492000 -n=1 --memory=2250 --wait=false                         | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | --driver=hyperkit                                                        |                   |         |         |                     |                     |
	| start   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT |                     |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | start --dry-run                                                          |                   |         |         |                     |                     |
	| start   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT |                     |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | start --dry-run                                                          |                   |         |         |                     |                     |
	| start   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT |                     |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | start --dry-run                                                          |                   |         |         |                     |                     |
	| pause   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | pause                                                                    |                   |         |         |                     |                     |
	| pause   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | pause                                                                    |                   |         |         |                     |                     |
	| pause   | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | pause                                                                    |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | unpause                                                                  |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | unpause                                                                  |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | unpause                                                                  |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | stop                                                                     |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 10:00 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | stop                                                                     |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                                  | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 10:00 PDT | 19 Aug 24 10:01 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000           |                   |         |         |                     |                     |
	|         | stop                                                                     |                   |         |         |                     |                     |
	| delete  | -p nospam-492000                                                         | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 10:01 PDT | 19 Aug 24 10:01 PDT |
	| start   | -p functional-622000                                                     | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:01 PDT | 19 Aug 24 10:02 PDT |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit                                             |                   |         |         |                     |                     |
	| start   | -p functional-622000                                                     | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:02 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:02:46
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:02:46.715279    3149 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:02:46.715467    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715473    3149 out.go:358] Setting ErrFile to fd 2...
	I0819 10:02:46.715476    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715649    3149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:02:46.717106    3149 out.go:352] Setting JSON to false
	I0819 10:02:46.739543    3149 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1936,"bootTime":1724085030,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:02:46.739637    3149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:02:46.761631    3149 out.go:177] * [functional-622000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:02:46.804362    3149 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:02:46.804421    3149 notify.go:220] Checking for updates...
	I0819 10:02:46.847125    3149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:02:46.868395    3149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:02:46.889188    3149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:02:46.931247    3149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:02:46.952016    3149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:02:46.974016    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:46.974175    3149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:02:46.974828    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:46.974917    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:46.984546    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50192
	I0819 10:02:46.984906    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:46.985340    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:46.985351    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:46.985609    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:46.985745    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.014206    3149 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:02:47.056388    3149 start.go:297] selected driver: hyperkit
	I0819 10:02:47.056417    3149 start.go:901] validating driver "hyperkit" against &{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.056645    3149 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:02:47.056829    3149 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.057043    3149 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:02:47.066748    3149 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:02:47.070635    3149 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.070656    3149 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:02:47.073332    3149 cni.go:84] Creating CNI manager for ""
	I0819 10:02:47.073357    3149 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
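	(Context for the recommendation above: dockershim was removed in Kubernetes v1.24, so with the docker runtime minikube drives the kubelet through cri-dockerd, which needs an explicit CNI; hence the bridge CNI is recommended here, and the cri-dockerd socket is wired up further down in this log.)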
	I0819 10:02:47.073438    3149 start.go:340] cluster config:
	{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.073535    3149 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.116046    3149 out.go:177] * Starting "functional-622000" primary control-plane node in "functional-622000" cluster
	I0819 10:02:47.137321    3149 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:02:47.137398    3149 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:02:47.137437    3149 cache.go:56] Caching tarball of preloaded images
	I0819 10:02:47.137630    3149 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:02:47.137652    3149 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:02:47.137794    3149 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/config.json ...
	I0819 10:02:47.138761    3149 start.go:360] acquireMachinesLock for functional-622000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:02:47.138881    3149 start.go:364] duration metric: took 95.93µs to acquireMachinesLock for "functional-622000"
	I0819 10:02:47.138927    3149 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:02:47.138944    3149 fix.go:54] fixHost starting: 
	I0819 10:02:47.139354    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.139383    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:47.148422    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50194
	I0819 10:02:47.148784    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:47.149127    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:47.149154    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:47.149416    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:47.149542    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.149650    3149 main.go:141] libmachine: (functional-622000) Calling .GetState
	I0819 10:02:47.149730    3149 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:02:47.149822    3149 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
	I0819 10:02:47.150790    3149 fix.go:112] recreateIfNeeded on functional-622000: state=Running err=<nil>
	W0819 10:02:47.150805    3149 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:02:47.172224    3149 out.go:177] * Updating the running hyperkit "functional-622000" VM ...
	I0819 10:02:47.193060    3149 machine.go:93] provisionDockerMachine start ...
	I0819 10:02:47.193093    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.193438    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.193671    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.193895    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194183    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194389    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.194647    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.194938    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.194949    3149 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:02:47.257006    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.257020    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257151    3149 buildroot.go:166] provisioning hostname "functional-622000"
	I0819 10:02:47.257163    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257264    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.257362    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.257459    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257534    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257627    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.257768    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.257923    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.257933    3149 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-622000 && echo "functional-622000" | sudo tee /etc/hostname
	I0819 10:02:47.330881    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.330901    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.331043    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.331162    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331251    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331340    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.331465    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.331608    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.331620    3149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-622000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-622000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-622000' | sudo tee -a /etc/hosts; 
				fi
			fi
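	(The script above is idempotent: the outer grep -xq checks whether any whole line of /etc/hosts already ends in the hostname; only if none does is an existing 127.0.1.1 entry rewritten in place, or, failing that, a new one appended.)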
	I0819 10:02:47.392695    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:02:47.392714    3149 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:02:47.392730    3149 buildroot.go:174] setting up certificates
	I0819 10:02:47.392736    3149 provision.go:84] configureAuth start
	I0819 10:02:47.392747    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.392879    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:47.392977    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.393055    3149 provision.go:143] copyHostCerts
	I0819 10:02:47.393086    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393160    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:02:47.393169    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393370    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:02:47.393581    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393621    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:02:47.393626    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393737    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:02:47.393914    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.393957    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:02:47.393962    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.394039    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:02:47.394180    3149 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.functional-622000 san=[127.0.0.1 192.169.0.4 functional-622000 localhost minikube]
	I0819 10:02:47.551861    3149 provision.go:177] copyRemoteCerts
	I0819 10:02:47.551924    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:02:47.551939    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.552077    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.552163    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.552249    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.552354    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.590340    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:02:47.590426    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:02:47.611171    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:02:47.611243    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 10:02:47.631670    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:02:47.631735    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:02:47.651195    3149 provision.go:87] duration metric: took 258.447258ms to configureAuth
	I0819 10:02:47.651207    3149 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:02:47.651340    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:47.651354    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.651503    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.651612    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.651695    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651787    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651883    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.652007    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.652132    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.652140    3149 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:02:47.713196    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:02:47.713207    3149 buildroot.go:70] root file system type: tmpfs
	I0819 10:02:47.713274    3149 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:02:47.713289    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.713416    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.713502    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713589    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713668    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.713818    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.713957    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.714002    3149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:02:47.788841    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
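	As the inline comments explain, systemd discards an inherited ExecStart when it sees a bare "ExecStart=", which is why the unit above clears the directive before redefining it. The same pattern in a minimal ordinary drop-in, as a sketch (the path and dockerd flags here are hypothetical, for illustration only):
	
		# /etc/systemd/system/docker.service.d/10-override.conf  (hypothetical)
		[Service]
		# An empty ExecStart= clears the command inherited from the base unit...
		ExecStart=
		# ...so the next ExecStart= becomes the only daemon command.
		ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock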
	
	I0819 10:02:47.788868    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.789014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.789110    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789218    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789323    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.789459    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.789600    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.789615    3149 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
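	(The command above makes the unit update idempotent: diff -u exits non-zero only when the freshly rendered docker.service.new differs from the installed unit, so the || branch swaps the file into place and runs daemon-reload, enable and restart only on a real change, while an unchanged unit leaves the running daemon untouched.)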
	I0819 10:02:47.859208    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:02:47.859221    3149 machine.go:96] duration metric: took 666.140503ms to provisionDockerMachine
	I0819 10:02:47.859235    3149 start.go:293] postStartSetup for "functional-622000" (driver="hyperkit")
	I0819 10:02:47.859243    3149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:02:47.859253    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.859433    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:02:47.859447    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.859550    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.859628    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.859723    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.859805    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.897960    3149 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:02:47.900903    3149 command_runner.go:130] > NAME=Buildroot
	I0819 10:02:47.900911    3149 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 10:02:47.900915    3149 command_runner.go:130] > ID=buildroot
	I0819 10:02:47.900919    3149 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 10:02:47.900923    3149 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 10:02:47.901013    3149 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:02:47.901024    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:02:47.901125    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:02:47.901317    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:02:47.901324    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:02:47.901516    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> hosts in /etc/test/nested/copy/2174
	I0819 10:02:47.901521    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> /etc/test/nested/copy/2174/hosts
	I0819 10:02:47.901573    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2174
	I0819 10:02:47.908902    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:02:47.928770    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts --> /etc/test/nested/copy/2174/hosts (40 bytes)
	I0819 10:02:47.949590    3149 start.go:296] duration metric: took 90.345683ms for postStartSetup
	I0819 10:02:47.949608    3149 fix.go:56] duration metric: took 810.670757ms for fixHost
	I0819 10:02:47.949626    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.949765    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.949853    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.949932    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.950014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.950145    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.950278    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.950285    3149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:02:48.015962    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724086968.201080300
	
	I0819 10:02:48.015973    3149 fix.go:216] guest clock: 1724086968.201080300
	I0819 10:02:48.015979    3149 fix.go:229] Guest: 2024-08-19 10:02:48.2010803 -0700 PDT Remote: 2024-08-19 10:02:47.949616 -0700 PDT m=+1.269337789 (delta=251.4643ms)
	I0819 10:02:48.015999    3149 fix.go:200] guest clock delta is within tolerance: 251.4643ms
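	(The delta above is guest clock minus host clock: 1724086968.2010803 - 1724086967.949616 ≈ 0.2514643 s, the 251.4643ms reported, so the guest clock is left alone and startup proceeds.)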
	I0819 10:02:48.016003    3149 start.go:83] releasing machines lock for "functional-622000", held for 877.108871ms
	I0819 10:02:48.016022    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016177    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:48.016275    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016589    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016695    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016767    3149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:02:48.016795    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016806    3149 ssh_runner.go:195] Run: cat /version.json
	I0819 10:02:48.016817    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016882    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.016971    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.016990    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.017080    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017101    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.017164    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.017193    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017328    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.049603    3149 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 10:02:48.049829    3149 ssh_runner.go:195] Run: systemctl --version
	I0819 10:02:48.095984    3149 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 10:02:48.096931    3149 command_runner.go:130] > systemd 252 (252)
	I0819 10:02:48.096961    3149 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 10:02:48.097053    3149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:02:48.102122    3149 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 10:02:48.102143    3149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:02:48.102177    3149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:02:48.110952    3149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 10:02:48.110963    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.111059    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.126457    3149 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0819 10:02:48.126734    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:02:48.135958    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:02:48.145231    3149 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.145276    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:02:48.154341    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.163160    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:02:48.171882    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.181115    3149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:02:48.190524    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:02:48.200851    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:02:48.209942    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
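	(The run of sed edits above rewrites /etc/containerd/config.toml in place to match the detected "cgroupfs" driver: SystemdCgroup is forced to false, the legacy v1 runc shims are mapped to io.containerd.runc.v2, the CNI conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is re-added under the CRI plugin.)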
	I0819 10:02:48.219031    3149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:02:48.227175    3149 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 10:02:48.227346    3149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:02:48.235625    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.388843    3149 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:02:48.408053    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.408141    3149 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:02:48.422240    3149 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0819 10:02:48.422854    3149 command_runner.go:130] > [Unit]
	I0819 10:02:48.422864    3149 command_runner.go:130] > Description=Docker Application Container Engine
	I0819 10:02:48.422868    3149 command_runner.go:130] > Documentation=https://docs.docker.com
	I0819 10:02:48.422873    3149 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0819 10:02:48.422878    3149 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0819 10:02:48.422882    3149 command_runner.go:130] > StartLimitBurst=3
	I0819 10:02:48.422886    3149 command_runner.go:130] > StartLimitIntervalSec=60
	I0819 10:02:48.422890    3149 command_runner.go:130] > [Service]
	I0819 10:02:48.422896    3149 command_runner.go:130] > Type=notify
	I0819 10:02:48.422900    3149 command_runner.go:130] > Restart=on-failure
	I0819 10:02:48.422906    3149 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0819 10:02:48.422914    3149 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0819 10:02:48.422920    3149 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0819 10:02:48.422926    3149 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0819 10:02:48.422932    3149 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0819 10:02:48.422942    3149 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0819 10:02:48.422948    3149 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0819 10:02:48.422956    3149 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0819 10:02:48.422962    3149 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0819 10:02:48.422966    3149 command_runner.go:130] > ExecStart=
	I0819 10:02:48.422983    3149 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0819 10:02:48.422987    3149 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0819 10:02:48.422994    3149 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0819 10:02:48.423000    3149 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0819 10:02:48.423003    3149 command_runner.go:130] > LimitNOFILE=infinity
	I0819 10:02:48.423011    3149 command_runner.go:130] > LimitNPROC=infinity
	I0819 10:02:48.423015    3149 command_runner.go:130] > LimitCORE=infinity
	I0819 10:02:48.423019    3149 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0819 10:02:48.423024    3149 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0819 10:02:48.423027    3149 command_runner.go:130] > TasksMax=infinity
	I0819 10:02:48.423030    3149 command_runner.go:130] > TimeoutStartSec=0
	I0819 10:02:48.423035    3149 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0819 10:02:48.423039    3149 command_runner.go:130] > Delegate=yes
	I0819 10:02:48.423043    3149 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0819 10:02:48.423047    3149 command_runner.go:130] > KillMode=process
	I0819 10:02:48.423050    3149 command_runner.go:130] > [Install]
	I0819 10:02:48.423059    3149 command_runner.go:130] > WantedBy=multi-user.target
	I0819 10:02:48.423191    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.438160    3149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:02:48.458938    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.471298    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:02:48.481842    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.498207    3149 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0819 10:02:48.498560    3149 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:02:48.501580    3149 command_runner.go:130] > /usr/bin/cri-dockerd
	I0819 10:02:48.501729    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:02:48.508831    3149 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:02:48.522701    3149 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:02:48.665555    3149 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:02:48.815200    3149 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.815277    3149 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:02:48.832404    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.960435    3149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:04:00.136198    3149 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0819 10:04:00.136213    3149 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0819 10:04:00.136223    3149 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.17566847s)
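	(This restart is where the failure originates: systemctl restart docker returns only after 1m11.17s with the control process exiting non-zero, so minikube dumps the docker unit's journal below to capture the cause.)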
	I0819 10:04:00.136284    3149 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:04:00.148256    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.148298    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	I0819 10:04:00.148306    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.148320    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	I0819 10:04:00.148330    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.148340    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.148351    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148359    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.148370    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148381    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148392    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148418    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148438    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148455    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148466    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148479    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148490    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148504    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148513    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148540    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148550    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.148560    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.148568    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.148578    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.148586    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.148595    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.148604    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.148612    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.148621    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.148630    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148638    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148647    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.148656    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.148672    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148682    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148691    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148700    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148709    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148720    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148729    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148811    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148823    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148831    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148840    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148849    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148858    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148867    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148876    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148884    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148893    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148902    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148911    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148920    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148928    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148942    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.148951    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148959    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148968    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148977    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148989    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148999    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149127    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.149138    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149151    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.149159    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.149168    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.149175    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.149183    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.149191    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	I0819 10:04:00.149204    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.149212    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	I0819 10:04:00.149230    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.149242    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	I0819 10:04:00.149252    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.149259    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.149267    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.149273    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.149280    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.149286    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.149293    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.149302    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.149310    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.149320    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0819 10:04:00.149325    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.149331    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.149336    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.149341    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.149347    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	I0819 10:04:00.149464    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.149477    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	I0819 10:04:00.149486    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.149496    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.149505    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149514    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.149523    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149534    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149546    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149561    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149571    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149582    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149592    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149601    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149610    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149624    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149633    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149656    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149665    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.149674    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.149682    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.149690    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.149699    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.149713    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.149723    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.149732    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.149740    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.149753    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149761    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149771    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.149780    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.149790    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149799    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149808    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149817    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149830    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149840    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149848    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149857    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149938    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149950    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149959    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149968    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149976    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149986    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149994    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150003    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150018    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150027    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150035    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150044    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150053    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150061    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.150074    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150083    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150092    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150101    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150113    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150122    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150133    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.150146    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150264    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.150275    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.150284    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.150292    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.150300    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.150307    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	I0819 10:04:00.150315    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.150322    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	I0819 10:04:00.150338    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.150349    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.150362    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	I0819 10:04:00.150374    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.150382    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.150389    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.150396    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.150402    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.150412    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.150420    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.150429    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.150437    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.150446    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.150455    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.150461    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.150467    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.150473    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.150480    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	I0819 10:04:00.150599    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.150613    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	I0819 10:04:00.150627    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.150637    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.150645    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150655    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.150664    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150675    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150684    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150700    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150710    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150721    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150730    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150739    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150748    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150763    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150772    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150786    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150795    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.150805    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.150813    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.150821    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.150830    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.150839    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.150849    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.150858    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.150866    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.150875    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150883    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150892    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.150902    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.150912    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150921    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150930    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150939    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150948    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150958    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150969    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150982    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.151044    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151058    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151067    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151076    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151085    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151095    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151104    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151113    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151122    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151130    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151139    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151148    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151156    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151164    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.151173    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151182    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151191    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151200    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151215    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151225    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151238    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.151350    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151361    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.151380    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.151390    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.151398    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.151406    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.151414    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	I0819 10:04:00.151422    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.151429    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	I0819 10:04:00.151445    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.151456    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.151464    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	I0819 10:04:00.151474    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.151483    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.151496    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.151504    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.151510    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.151519    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151531    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151541    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151551    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151563    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151616    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151631    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151641    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151653    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151663    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151673    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151683    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151693    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151704    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151714    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151724    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151734    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151744    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151754    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151767    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151777    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151787    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151805    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151815    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151865    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151878    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151887    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151896    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151908    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151918    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151928    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151938    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151948    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151962    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151972    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151982    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151993    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152004    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152013    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152023    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152035    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152045    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152055    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152134    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152147    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152158    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152170    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152180    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152190    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152200    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152214    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152224    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152234    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152244    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152254    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152263    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152273    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152283    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152297    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152307    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152317    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152327    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152413    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152425    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152439    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152449    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152459    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152467    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152479    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152493    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152503    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152514    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152521    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.152532    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.152543    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152555    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152567    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152575    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152590    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152599    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152609    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152617    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152761    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152775    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152785    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152800    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152808    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152817    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152828    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152836    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152847    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152858    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152866    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152876    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152883    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152895    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152904    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152912    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152925    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152937    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152946    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152957    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152965    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152974    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152984    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152996    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153006    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153016    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153024    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153042    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153053    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153069    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153079    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153093    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153106    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153116    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153126    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153134    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153147    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153159    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153175    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153190    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153200    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153213    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153227    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153243    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153256    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153269    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153279    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153290    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153297    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153312    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	I0819 10:04:00.153323    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153334    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153345    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153361    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
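The "using the force" line above is Docker's standard stop sequence: signal 15 (SIGTERM), a 10-second grace period, then SIGKILL. A minimal Go sketch of that pattern follows; it is a hypothetical illustration against a placeholder "sleep" process, not Docker's own code, which signals the containerd task rather than a child it exec'd itself.

	package main

	import (
		"fmt"
		"os/exec"
		"syscall"
		"time"
	)

	func main() {
		// Stand-in for the container's main process (hypothetical).
		cmd := exec.Command("sleep", "60")
		if err := cmd.Start(); err != nil {
			fmt.Println("start:", err)
			return
		}
		done := make(chan error, 1)
		go func() { done <- cmd.Wait() }()

		// Signal 15 first...
		cmd.Process.Signal(syscall.SIGTERM)
		select {
		case err := <-done:
			fmt.Println("exited within the grace period:", err)
		case <-time.After(10 * time.Second):
			// ...then "using the force" once the 10s grace period lapses,
			// matching the journal line above.
			cmd.Process.Kill()
			fmt.Println("force-killed:", <-done)
		}
	}
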
	I0819 10:04:00.153372    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.153379    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.153414    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.153426    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.153433    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.153439    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.153445    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	I0819 10:04:00.153454    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.153461    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	I0819 10:04:00.153471    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0819 10:04:00.153480    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0819 10:04:00.153486    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0819 10:04:00.153492    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
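This is the failure that kills the restart: the new dockerd (pid 3529) logs "Starting up" at 17:03:00, but containerd never comes up behind /run/containerd/containerd.sock, so exactly one minute later the blocking dial gives up with "context deadline exceeded" and systemd records the unit as failed. Below is a minimal Go sketch of that retry-until-deadline dial pattern; the 60-second budget is an assumption read off the timestamps, and this is an illustration, not Docker's actual libcontainerd client.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 60s budget assumed from the journal above: "Starting up" at
		// 17:03:00, "context deadline exceeded" at 17:04:00.
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()

		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
			if err == nil {
				conn.Close()
				fmt.Println("containerd socket reachable")
				return
			}
			select {
			case <-ctx.Done():
				// After a minute of failed attempts this reports
				// "context deadline exceeded", as in the log above.
				fmt.Println("failed to dial:", ctx.Err())
				return
			case <-time.After(time.Second): // brief backoff, then retry
			}
		}
	}
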
	I0819 10:04:00.188229    3149 out.go:201] 
	W0819 10:04:00.209936    3149 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0819 10:04:00.210429    3149 out.go:270] * 
	W0819 10:04:00.211654    3149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:04:00.274709    3149 out.go:201] 
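
The recurring failure in this run is dockerd timing out after roughly 60s while dialing /run/containerd/containerd.sock (see the "failed to dial" line above). A minimal triage sketch, assuming shell access to the functional-622000 VM (e.g. via `minikube ssh -p functional-622000`); the commands below are standard systemd/coreutils tooling and are not part of the captured test output:

	# Is the containerd unit alive, and does its socket actually exist?
	sudo systemctl status containerd --no-pager
	sudo ls -l /run/containerd/containerd.sock
	# Recent containerd logs from this boot often show why the socket never came up
	sudo journalctl -u containerd -b --no-pager | tail -n 50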
	
	
	==> Docker <==
	Aug 19 17:06:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:06:00 functional-622000 dockerd[4227]: time="2024-08-19T17:06:00.998133929Z" level=info msg="Starting up"
	Aug 19 17:07:01 functional-622000 dockerd[4227]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="error getting RW layer size for container ID '9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5'"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="error getting RW layer size for container ID '5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547'"
	Aug 19 17:07:01 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="error getting RW layer size for container ID '6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5'"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="error getting RW layer size for container ID 'be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa'"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="error getting RW layer size for container ID 'ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd'"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="error getting RW layer size for container ID 'c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:07:01 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90'"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="error getting RW layer size for container ID 'd997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:07:01 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:07:01Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824'"
	Aug 19 17:07:01 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 19 17:07:01 functional-622000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Aug 19 17:07:01 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:07:01 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-19T17:07:03Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.100608] systemd-fstab-generator[514]: Ignoring "noauto" option for root device
	[  +1.943533] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +0.277412] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.100828] systemd-fstab-generator[844]: Ignoring "noauto" option for root device
	[  +0.052131] kauditd_printk_skb: 117 callbacks suppressed
	[  +0.061352] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +2.454350] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.095628] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.097890] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.135254] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.642141] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.053482] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.547324] systemd-fstab-generator[1462]: Ignoring "noauto" option for root device
	[  +3.456953] systemd-fstab-generator[1592]: Ignoring "noauto" option for root device
	[  +0.049385] kauditd_printk_skb: 70 callbacks suppressed
	[Aug19 17:02] systemd-fstab-generator[1997]: Ignoring "noauto" option for root device
	[  +0.071304] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.815922] systemd-fstab-generator[2131]: Ignoring "noauto" option for root device
	[  +0.113741] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.001342] kauditd_printk_skb: 98 callbacks suppressed
	[ +26.946888] systemd-fstab-generator[3048]: Ignoring "noauto" option for root device
	[  +0.280843] systemd-fstab-generator[3084]: Ignoring "noauto" option for root device
	[  +0.156587] systemd-fstab-generator[3096]: Ignoring "noauto" option for root device
	[  +0.148300] systemd-fstab-generator[3110]: Ignoring "noauto" option for root device
	[  +5.168584] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> kernel <==
	 17:08:01 up 6 min,  0 users,  load average: 0.00, 0.08, 0.06
	Linux functional-622000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 17:08:01 functional-622000 kubelet[2004]: I0819 17:08:01.196924    2004 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.196971    2004 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.197010    2004 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.197162    2004 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: I0819 17:08:01.197284    2004 setters.go:600] "Node became not ready" node="functional-622000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-08-19T17:08:01Z","lastTransitionTime":"2024-08-19T17:08:01Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 5m12.704509879s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/var/run/docker.sock: read: connection reset by peer]"}
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198051    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198099    2004 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198227    2004 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198260    2004 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: I0819 17:08:01.198269    2004 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198281    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198295    2004 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198374    2004 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198640    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198703    2004 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198819    2004 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.198923    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:08:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:08:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:08:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2024-08-19T17:08:01Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 5m12.704509879s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/var/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-622000\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000/status?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.199082    2004 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.199148    2004 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.199192    2004 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.199791    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.200284    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.200828    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.201301    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:08:01 functional-622000 kubelet[2004]: E0819 17:08:01.201342    2004 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0819 10:07:00.678957    3270 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:07:00.693273    3270 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:07:00.705519    3270 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:07:00.717719    3270 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:07:00.729539    3270 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:07:00.740244    3270 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:07:00.752120    3270 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:07:00.763276    3270 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-622000 -n functional-622000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-622000 -n functional-622000: exit status 2 (150.888747ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-622000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (120.34s)
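
The capture above shows the full failure chain: dockerd times out dialing /run/containerd/containerd.sock, docker.service loops through restarts, cri-dockerd loses /var/run/docker.sock, and the kubelet marks the runtime and node NotReady, which in turn takes the apiserver on 192.169.0.4:8441 offline. A minimal manual triage sketch for that chain, assuming the functional-622000 VM from this run is still up (these exact invocations are illustrative, not taken from the harness output):

	# 1. Is containerd answering? dockerd's startup timed out dialing its socket.
	out/minikube-darwin-amd64 -p functional-622000 ssh "sudo systemctl status containerd --no-pager"
	# 2. Read dockerd's restart loop (the restart counter above was at 4).
	out/minikube-darwin-amd64 -p functional-622000 ssh "sudo journalctl -u docker --no-pager | tail -n 50"
	# 3. Only once docker.service stays active will the CRI endpoint validate again.
	out/minikube-darwin-amd64 -p functional-622000 ssh "sudo crictl ps -a"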

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (2.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh sudo crictl images: exit status 1 (2.159621748s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1126: failed to get images by "out/minikube-darwin-amd64 -p functional-622000 ssh sudo crictl images" ssh exit status 1
functional_test.go:1130: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (2.16s)
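
The assertion at functional_test.go:1130 amounts to listing the node's images and searching the output for the pause:3.3 image ID prefix. A rough shell equivalent of that check (a sketch, not the test's actual Go code):

	out/minikube-darwin-amd64 -p functional-622000 ssh "sudo crictl images" | grep 0184c1613d929

With CRI socket validation failing, crictl never returns an image list, so there is nothing for the match to find.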

TestFunctional/serial/CacheCmd/cache/cache_reload (180.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh sudo docker rmi registry.k8s.io/pause:latest
E0819 10:15:28.992134    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (57.689442204s)

-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-amd64 -p functional-622000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (2.157003378s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 cache reload
E0819 10:16:52.078650    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1158: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 cache reload: (1m58.240698484s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (2.165901231s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1165: expected "out/minikube-darwin-amd64 -p functional-622000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (180.25s)
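
For reference, the sequence this test drives can be replayed by hand with the same commands shown in the log above:

	# Remove the image on the node, reload it from minikube's local cache,
	# then confirm it is back using crictl's image-inspect subcommand.
	out/minikube-darwin-amd64 -p functional-622000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-622000 cache reload
	out/minikube-darwin-amd64 -p functional-622000 ssh sudo crictl inspecti registry.k8s.io/pause:latest

Here the reload itself completed (1m58s), but the final inspecti still failed because cri-dockerd could not validate its connection to the Docker daemon.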

TestFunctional/serial/MinikubeKubectlCmd (120.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 kubectl -- --context functional-622000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 kubectl -- --context functional-622000 get pods: exit status 1 (1.666527668s)

** stderr ** 
	E0819 10:20:05.228581    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	E0819 10:20:05.331334    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	E0819 10:20:05.434030    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	E0819 10:20:05.536230    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	E0819 10:20:05.638244    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	The connection to the server 192.169.0.4:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-amd64 -p functional-622000 kubectl -- --context functional-622000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-622000 -n functional-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-622000 -n functional-622000: exit status 2 (164.587857ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 logs -n 25
E0819 10:20:28.998502    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 logs -n 25: (1m58.468297555s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                              Args                              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | pause                                                          |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 10:00 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 10:00 PDT | 19 Aug 24 10:01 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| delete  | -p nospam-492000                                               | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 10:01 PDT | 19 Aug 24 10:01 PDT |
	| start   | -p functional-622000                                           | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:01 PDT | 19 Aug 24 10:02 PDT |
	|         | --memory=4000                                                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                          |                   |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit                                   |                   |         |         |                     |                     |
	| start   | -p functional-622000                                           | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:02 PDT |                     |
	|         | --alsologtostderr -v=8                                         |                   |         |         |                     |                     |
	| cache   | functional-622000 cache add                                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:08 PDT | 19 Aug 24 10:10 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | functional-622000 cache add                                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:10 PDT | 19 Aug 24 10:12 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | functional-622000 cache add                                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:12 PDT | 19 Aug 24 10:14 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-622000 cache add                                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:14 PDT | 19 Aug 24 10:15 PDT |
	|         | minikube-local-cache-test:functional-622000                    |                   |         |         |                     |                     |
	| cache   | functional-622000 cache delete                                 | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT | 19 Aug 24 10:15 PDT |
	|         | minikube-local-cache-test:functional-622000                    |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT | 19 Aug 24 10:15 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | list                                                           | minikube          | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT | 19 Aug 24 10:15 PDT |
	| ssh     | functional-622000 ssh sudo                                     | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT |                     |
	|         | crictl images                                                  |                   |         |         |                     |                     |
	| ssh     | functional-622000                                              | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT |                     |
	|         | ssh sudo docker rmi                                            |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| ssh     | functional-622000 ssh                                          | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:16 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-622000 cache reload                                 | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:16 PDT | 19 Aug 24 10:18 PDT |
	| ssh     | functional-622000 ssh                                          | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:18 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 19 Aug 24 10:18 PDT | 19 Aug 24 10:18 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 19 Aug 24 10:18 PDT | 19 Aug 24 10:18 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| kubectl | functional-622000 kubectl --                                   | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:20 PDT |                     |
	|         | --context functional-622000                                    |                   |         |         |                     |                     |
	|         | get pods                                                       |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:02:46
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:02:46.715279    3149 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:02:46.715467    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715473    3149 out.go:358] Setting ErrFile to fd 2...
	I0819 10:02:46.715476    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715649    3149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:02:46.717106    3149 out.go:352] Setting JSON to false
	I0819 10:02:46.739543    3149 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1936,"bootTime":1724085030,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:02:46.739637    3149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:02:46.761631    3149 out.go:177] * [functional-622000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:02:46.804362    3149 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:02:46.804421    3149 notify.go:220] Checking for updates...
	I0819 10:02:46.847125    3149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:02:46.868395    3149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:02:46.889188    3149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:02:46.931247    3149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:02:46.952016    3149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:02:46.974016    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:46.974175    3149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:02:46.974828    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:46.974917    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:46.984546    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50192
	I0819 10:02:46.984906    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:46.985340    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:46.985351    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:46.985609    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:46.985745    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.014206    3149 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:02:47.056388    3149 start.go:297] selected driver: hyperkit
	I0819 10:02:47.056417    3149 start.go:901] validating driver "hyperkit" against &{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.056645    3149 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:02:47.056829    3149 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.057043    3149 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:02:47.066748    3149 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:02:47.070635    3149 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.070656    3149 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:02:47.073332    3149 cni.go:84] Creating CNI manager for ""
	I0819 10:02:47.073357    3149 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 10:02:47.073438    3149 start.go:340] cluster config:
	{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.073535    3149 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.116046    3149 out.go:177] * Starting "functional-622000" primary control-plane node in "functional-622000" cluster
	I0819 10:02:47.137321    3149 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:02:47.137398    3149 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:02:47.137437    3149 cache.go:56] Caching tarball of preloaded images
	I0819 10:02:47.137630    3149 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:02:47.137652    3149 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:02:47.137794    3149 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/config.json ...
	I0819 10:02:47.138761    3149 start.go:360] acquireMachinesLock for functional-622000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:02:47.138881    3149 start.go:364] duration metric: took 95.93µs to acquireMachinesLock for "functional-622000"
	I0819 10:02:47.138927    3149 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:02:47.138944    3149 fix.go:54] fixHost starting: 
	I0819 10:02:47.139354    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.139383    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:47.148422    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50194
	I0819 10:02:47.148784    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:47.149127    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:47.149154    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:47.149416    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:47.149542    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.149650    3149 main.go:141] libmachine: (functional-622000) Calling .GetState
	I0819 10:02:47.149730    3149 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:02:47.149822    3149 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
	I0819 10:02:47.150790    3149 fix.go:112] recreateIfNeeded on functional-622000: state=Running err=<nil>
	W0819 10:02:47.150805    3149 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:02:47.172224    3149 out.go:177] * Updating the running hyperkit "functional-622000" VM ...
	I0819 10:02:47.193060    3149 machine.go:93] provisionDockerMachine start ...
	I0819 10:02:47.193093    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.193438    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.193671    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.193895    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194183    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194389    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.194647    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.194938    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.194949    3149 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:02:47.257006    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.257020    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257151    3149 buildroot.go:166] provisioning hostname "functional-622000"
	I0819 10:02:47.257163    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257264    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.257362    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.257459    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257534    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257627    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.257768    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.257923    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.257933    3149 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-622000 && echo "functional-622000" | sudo tee /etc/hostname
	I0819 10:02:47.330881    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.330901    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.331043    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.331162    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331251    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331340    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.331465    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.331608    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.331620    3149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-622000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-622000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-622000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:02:47.392695    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:02:47.392714    3149 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:02:47.392730    3149 buildroot.go:174] setting up certificates
	I0819 10:02:47.392736    3149 provision.go:84] configureAuth start
	I0819 10:02:47.392747    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.392879    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:47.392977    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.393055    3149 provision.go:143] copyHostCerts
	I0819 10:02:47.393086    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393160    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:02:47.393169    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393370    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:02:47.393581    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393621    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:02:47.393626    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393737    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:02:47.393914    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.393957    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:02:47.393962    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.394039    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:02:47.394180    3149 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.functional-622000 san=[127.0.0.1 192.169.0.4 functional-622000 localhost minikube]
	I0819 10:02:47.551861    3149 provision.go:177] copyRemoteCerts
	I0819 10:02:47.551924    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:02:47.551939    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.552077    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.552163    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.552249    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.552354    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.590340    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:02:47.590426    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:02:47.611171    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:02:47.611243    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 10:02:47.631670    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:02:47.631735    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
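
	[editor's note] At this point the guest holds the full Docker TLS bundle: /etc/docker/ca.pem, server.pem and server-key.pem, with the server cert generated for the SANs listed above (127.0.0.1 192.169.0.4 functional-622000 localhost minikube). One way to sanity-check the chain and SANs from inside the guest, as a hedged sketch:
	
	    # verify the server cert chains to the provisioned CA, then list its SANs
	    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	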
	I0819 10:02:47.651195    3149 provision.go:87] duration metric: took 258.447258ms to configureAuth
	I0819 10:02:47.651207    3149 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:02:47.651340    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:47.651354    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.651503    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.651612    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.651695    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651787    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651883    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.652007    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.652132    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.652140    3149 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:02:47.713196    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:02:47.713207    3149 buildroot.go:70] root file system type: tmpfs
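
	[editor's note] The probe above is plain coreutils; the tmpfs result means the buildroot guest's root filesystem is volatile, which is presumably why minikube re-provisions the docker unit (next step) on every start rather than relying on a persisted copy. Reproducible by hand:
	
	    # print the filesystem type backing / (tmpfs on this buildroot guest)
	    df --output=fstype / | tail -n 1
	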
	I0819 10:02:47.713274    3149 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:02:47.713289    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.713416    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.713502    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713589    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713668    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.713818    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.713957    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.714002    3149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:02:47.788841    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:02:47.788868    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.789014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.789110    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789218    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789323    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.789459    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.789600    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.789615    3149 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:02:47.859208    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
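
	[editor's note] The command above is a change-gated install: the freshly rendered unit is diffed against the live one, and only when they differ does the || branch move it into place and daemon-reload, enable, and restart docker. Here the output is empty, so diff found no change and docker was left running. The pattern, as a minimal sketch with a hypothetical UNIT variable:
	
	    # install a unit file only if it differs from the one already in place
	    UNIT=/lib/systemd/system/docker.service
	    sudo diff -u "$UNIT" "$UNIT.new" || {
	      sudo mv "$UNIT.new" "$UNIT"
	      sudo systemctl daemon-reload &&
	      sudo systemctl enable docker &&
	      sudo systemctl restart docker
	    }
	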
	I0819 10:02:47.859221    3149 machine.go:96] duration metric: took 666.140503ms to provisionDockerMachine
	I0819 10:02:47.859235    3149 start.go:293] postStartSetup for "functional-622000" (driver="hyperkit")
	I0819 10:02:47.859243    3149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:02:47.859253    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.859433    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:02:47.859447    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.859550    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.859628    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.859723    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.859805    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.897960    3149 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:02:47.900903    3149 command_runner.go:130] > NAME=Buildroot
	I0819 10:02:47.900911    3149 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 10:02:47.900915    3149 command_runner.go:130] > ID=buildroot
	I0819 10:02:47.900919    3149 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 10:02:47.900923    3149 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 10:02:47.901013    3149 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:02:47.901024    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:02:47.901125    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:02:47.901317    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:02:47.901324    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:02:47.901516    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> hosts in /etc/test/nested/copy/2174
	I0819 10:02:47.901521    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> /etc/test/nested/copy/2174/hosts
	I0819 10:02:47.901573    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2174
	I0819 10:02:47.908902    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:02:47.928770    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts --> /etc/test/nested/copy/2174/hosts (40 bytes)
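
	[editor's note] The two transfers above come from minikube's file sync: anything under the .minikube/files directory on the host (here rooted at the MINIKUBE_HOME used by this run) is mirrored into the guest at the corresponding absolute path, which is how the test drops /etc/ssl/certs/21742.pem and /etc/test/nested/copy/2174/hosts into the VM. A hedged sketch of staging a file for sync before a start, with illustrative contents:
	
	    # host side: files/<path> is copied to /<path> inside the guest on start
	    mkdir -p ~/.minikube/files/etc/test/nested/copy/2174
	    echo '127.0.0.1 example.test' > ~/.minikube/files/etc/test/nested/copy/2174/hosts   # hypothetical content
	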
	I0819 10:02:47.949590    3149 start.go:296] duration metric: took 90.345683ms for postStartSetup
	I0819 10:02:47.949608    3149 fix.go:56] duration metric: took 810.670757ms for fixHost
	I0819 10:02:47.949626    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.949765    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.949853    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.949932    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.950014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.950145    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.950278    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.950285    3149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:02:48.015962    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724086968.201080300
	
	I0819 10:02:48.015973    3149 fix.go:216] guest clock: 1724086968.201080300
	I0819 10:02:48.015979    3149 fix.go:229] Guest: 2024-08-19 10:02:48.2010803 -0700 PDT Remote: 2024-08-19 10:02:47.949616 -0700 PDT m=+1.269337789 (delta=251.4643ms)
	I0819 10:02:48.015999    3149 fix.go:200] guest clock delta is within tolerance: 251.4643ms
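
	[editor's note] The skew check above reads the guest wall clock with sub-second precision and compares it to the host's; the ~251ms delta is inside minikube's tolerance, so no adjustment is made. A rough by-hand equivalent (assumes GNU date for %N; the macOS host's BSD date lacks it, so run both reads on the guest or a Linux box):
	
	    # compare guest and host clocks; the ssh target matches this run's VM
	    guest=$(ssh docker@192.169.0.4 date +%s.%N)
	    host=$(date +%s.%N)
	    awk -v g="$guest" -v h="$host" 'BEGIN{printf "delta: %.3fs\n", g-h}'
	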
	I0819 10:02:48.016003    3149 start.go:83] releasing machines lock for "functional-622000", held for 877.108871ms
	I0819 10:02:48.016022    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016177    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:48.016275    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016589    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016695    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016767    3149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:02:48.016795    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016806    3149 ssh_runner.go:195] Run: cat /version.json
	I0819 10:02:48.016817    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016882    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.016971    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.016990    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.017080    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017101    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.017164    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.017193    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017328    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.049603    3149 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 10:02:48.049829    3149 ssh_runner.go:195] Run: systemctl --version
	I0819 10:02:48.095984    3149 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 10:02:48.096931    3149 command_runner.go:130] > systemd 252 (252)
	I0819 10:02:48.096961    3149 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 10:02:48.097053    3149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:02:48.102122    3149 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 10:02:48.102143    3149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:02:48.102177    3149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:02:48.110952    3149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 10:02:48.110963    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.111059    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.126457    3149 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
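
	[editor's note] /etc/crictl.yaml tells crictl which runtime endpoint to talk to; here it is first pointed at containerd's socket while the cgroup driver is probed, and further down in this log it is rewritten to cri-dockerd once docker is selected. With the file in place crictl needs no flags; -r overrides it explicitly:
	
	    # crictl reads runtime-endpoint from /etc/crictl.yaml by default
	    sudo crictl ps
	    sudo crictl -r unix:///var/run/cri-dockerd.sock ps
	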
	I0819 10:02:48.126734    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:02:48.135958    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:02:48.145231    3149 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.145276    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:02:48.154341    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.163160    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:02:48.171882    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.181115    3149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:02:48.190524    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:02:48.200851    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:02:48.209942    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
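
	[editor's note] The run of sed edits above rewrites /etc/containerd/config.toml in place: pause image pinned to registry.k8s.io/pause:3.10, restrict_oom_score_adj disabled, SystemdCgroup=false (i.e. the cgroupfs driver announced at 10:02:48.145231), legacy v1 runtime names mapped to io.containerd.runc.v2, and the CNI conf_dir set to /etc/cni/net.d. A quick spot-check of the result:
	
	    # confirm the key rewritten settings in the containerd config
	    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	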
	I0819 10:02:48.219031    3149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:02:48.227175    3149 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 10:02:48.227346    3149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
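
	[editor's note] Both settings touched here are standard Kubernetes node networking prerequisites: bridged traffic must traverse iptables, and the node must forward IPv4. The log makes the writes one-shot; a persistent equivalent, as a sketch using the usual sysctl.d convention:
	
	    # persist the prerequisites across reboots
	    cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	    net.bridge.bridge-nf-call-iptables = 1
	    net.ipv4.ip_forward = 1
	    EOF
	    sudo sysctl --system
	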
	I0819 10:02:48.235625    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.388843    3149 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:02:48.408053    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.408141    3149 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:02:48.422240    3149 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0819 10:02:48.422854    3149 command_runner.go:130] > [Unit]
	I0819 10:02:48.422864    3149 command_runner.go:130] > Description=Docker Application Container Engine
	I0819 10:02:48.422868    3149 command_runner.go:130] > Documentation=https://docs.docker.com
	I0819 10:02:48.422873    3149 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0819 10:02:48.422878    3149 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0819 10:02:48.422882    3149 command_runner.go:130] > StartLimitBurst=3
	I0819 10:02:48.422886    3149 command_runner.go:130] > StartLimitIntervalSec=60
	I0819 10:02:48.422890    3149 command_runner.go:130] > [Service]
	I0819 10:02:48.422896    3149 command_runner.go:130] > Type=notify
	I0819 10:02:48.422900    3149 command_runner.go:130] > Restart=on-failure
	I0819 10:02:48.422906    3149 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0819 10:02:48.422914    3149 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0819 10:02:48.422920    3149 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0819 10:02:48.422926    3149 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0819 10:02:48.422932    3149 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0819 10:02:48.422942    3149 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0819 10:02:48.422948    3149 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0819 10:02:48.422956    3149 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0819 10:02:48.422962    3149 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0819 10:02:48.422966    3149 command_runner.go:130] > ExecStart=
	I0819 10:02:48.422983    3149 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0819 10:02:48.422987    3149 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0819 10:02:48.422994    3149 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0819 10:02:48.423000    3149 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0819 10:02:48.423003    3149 command_runner.go:130] > LimitNOFILE=infinity
	I0819 10:02:48.423011    3149 command_runner.go:130] > LimitNPROC=infinity
	I0819 10:02:48.423015    3149 command_runner.go:130] > LimitCORE=infinity
	I0819 10:02:48.423019    3149 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0819 10:02:48.423024    3149 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0819 10:02:48.423027    3149 command_runner.go:130] > TasksMax=infinity
	I0819 10:02:48.423030    3149 command_runner.go:130] > TimeoutStartSec=0
	I0819 10:02:48.423035    3149 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0819 10:02:48.423039    3149 command_runner.go:130] > Delegate=yes
	I0819 10:02:48.423043    3149 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0819 10:02:48.423047    3149 command_runner.go:130] > KillMode=process
	I0819 10:02:48.423050    3149 command_runner.go:130] > [Install]
	I0819 10:02:48.423059    3149 command_runner.go:130] > WantedBy=multi-user.target
	I0819 10:02:48.423191    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.438160    3149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:02:48.458938    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.471298    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:02:48.481842    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.498207    3149 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0819 10:02:48.498560    3149 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:02:48.501580    3149 command_runner.go:130] > /usr/bin/cri-dockerd
	I0819 10:02:48.501729    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:02:48.508831    3149 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
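
	[editor's note] cri-dockerd is configured via a systemd drop-in (the 190-byte 10-cni.conf scp'd above; its contents are not shown in this log) rather than by editing the shipped unit. Drop-ins under /etc/systemd/system/<unit>.d/ are merged over the base unit, and the merged result can be inspected on the guest:
	
	    # show the base cri-docker unit with all drop-ins applied
	    sudo systemctl cat cri-docker.service
	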
	I0819 10:02:48.522701    3149 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:02:48.665555    3149 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:02:48.815200    3149 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.815277    3149 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:02:48.832404    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.960435    3149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:04:00.136198    3149 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0819 10:04:00.136213    3149 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0819 10:04:00.136223    3149 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.17566847s)
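
	[editor's note] This is where the run goes wrong: after the daemon.json rewrite to the cgroupfs driver, `sudo systemctl restart docker` blocks for 1m11s and the control process exits non-zero, and minikube then dumps the docker journal (below) to diagnose. The two commands systemd itself suggests are the right first step on the guest:
	
	    # inspect the failed unit and its recent journal
	    systemctl status docker.service
	    journalctl -xeu docker.service
	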
	I0819 10:04:00.136284    3149 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:04:00.148256    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.148298    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	I0819 10:04:00.148306    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.148320    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	I0819 10:04:00.148330    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.148340    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.148351    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148359    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.148370    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148381    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148392    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148418    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148438    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148455    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148466    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148479    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148490    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148504    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148513    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148540    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148550    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.148560    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.148568    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.148578    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.148586    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.148595    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.148604    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.148612    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.148621    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.148630    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148638    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148647    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.148656    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.148672    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148682    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148691    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148700    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148709    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148720    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148729    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148811    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148823    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148831    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148840    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148849    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148858    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148867    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148876    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148884    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148893    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148902    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148911    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148920    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148928    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148942    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.148951    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148959    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148968    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148977    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148989    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148999    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149127    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.149138    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149151    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.149159    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.149168    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.149175    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.149183    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.149191    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	I0819 10:04:00.149204    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.149212    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	I0819 10:04:00.149230    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.149242    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	I0819 10:04:00.149252    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.149259    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.149267    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.149273    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.149280    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.149286    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.149293    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.149302    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.149310    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.149320    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0819 10:04:00.149325    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.149331    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.149336    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.149341    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.149347    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	I0819 10:04:00.149464    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.149477    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	I0819 10:04:00.149486    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.149496    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.149505    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149514    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.149523    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149534    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149546    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149561    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149571    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149582    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149592    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149601    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149610    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149624    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149633    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149656    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149665    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.149674    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.149682    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.149690    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.149699    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.149713    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.149723    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.149732    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.149740    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.149753    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149761    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149771    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.149780    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.149790    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149799    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149808    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149817    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149830    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149840    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149848    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149857    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149938    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149950    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149959    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149968    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149976    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149986    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149994    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150003    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150018    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150027    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150035    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150044    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150053    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150061    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.150074    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150083    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150092    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150101    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150113    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150122    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150133    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.150146    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150264    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.150275    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.150284    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.150292    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.150300    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.150307    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	I0819 10:04:00.150315    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.150322    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	I0819 10:04:00.150338    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.150349    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.150362    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	I0819 10:04:00.150374    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.150382    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.150389    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.150396    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.150402    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.150412    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.150420    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.150429    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.150437    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.150446    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.150455    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.150461    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.150467    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.150473    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.150480    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	I0819 10:04:00.150599    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.150613    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	I0819 10:04:00.150627    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.150637    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.150645    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150655    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.150664    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150675    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150684    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150700    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150710    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150721    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150730    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150739    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150748    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150763    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150772    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150786    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150795    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.150805    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.150813    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.150821    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.150830    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.150839    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.150849    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.150858    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.150866    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.150875    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150883    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150892    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.150902    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.150912    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150921    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150930    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150939    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150948    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150958    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150969    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150982    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.151044    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151058    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151067    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151076    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151085    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151095    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151104    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151113    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151122    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151130    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151139    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151148    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151156    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151164    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.151173    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151182    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151191    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151200    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151215    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151225    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151238    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.151350    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151361    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.151380    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.151390    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.151398    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.151406    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.151414    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	I0819 10:04:00.151422    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.151429    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	I0819 10:04:00.151445    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.151456    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.151464    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	I0819 10:04:00.151474    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.151483    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.151496    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.151504    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.151510    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.151519    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151531    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151541    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151551    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151563    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151616    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151631    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151641    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151653    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151663    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151673    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151683    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151693    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151704    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151714    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151724    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151734    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151744    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151754    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151767    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151777    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151787    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151805    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151815    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151865    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151878    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151887    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151896    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151908    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151918    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151928    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151938    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151948    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151962    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151972    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151982    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151993    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152004    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152013    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152023    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152035    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152045    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152055    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152134    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152147    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152158    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152170    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152180    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152190    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152200    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152214    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152224    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152234    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152244    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152254    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152263    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152273    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152283    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152297    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152307    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152317    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152327    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152413    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152425    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152439    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152449    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152459    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152467    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152479    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152493    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152503    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152514    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152521    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.152532    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.152543    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152555    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152567    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152575    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152590    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152599    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152609    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152617    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152761    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152775    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152785    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152800    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152808    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152817    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152828    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152836    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152847    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152858    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152866    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152876    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152883    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152895    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152904    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152912    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152925    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152937    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152946    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152957    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152965    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152974    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152984    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152996    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153006    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153016    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153024    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153042    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153053    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153069    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153079    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153093    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153106    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153116    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153126    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153134    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153147    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153159    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153175    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153190    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153200    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153213    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153227    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153243    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153256    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153269    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153279    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153290    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153297    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153312    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	I0819 10:04:00.153323    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153334    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153345    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153361    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153372    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.153379    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.153414    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.153426    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.153433    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.153439    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.153445    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	I0819 10:04:00.153454    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.153461    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	I0819 10:04:00.153471    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0819 10:04:00.153480    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0819 10:04:00.153486    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0819 10:04:00.153492    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	I0819 10:04:00.188229    3149 out.go:201] 
	W0819 10:04:00.209936    3149 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0819 10:04:00.210429    3149 out.go:270] * 
	W0819 10:04:00.211654    3149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:04:00.274709    3149 out.go:201] 
	
	
	==> Docker <==
	Aug 19 17:20:03 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:20:03 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:20:03 functional-622000 dockerd[7525]: time="2024-08-19T17:20:03.605340359Z" level=info msg="Starting up"
	Aug 19 17:21:03 functional-622000 dockerd[7525]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="error getting RW layer size for container ID '9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5'"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="error getting RW layer size for container ID '5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547'"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="error getting RW layer size for container ID 'c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90'"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="error getting RW layer size for container ID '6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5'"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="error getting RW layer size for container ID 'ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd'"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="error getting RW layer size for container ID 'be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa'"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="error getting RW layer size for container ID 'd997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:21:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:21:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824'"
	Aug 19 17:21:03 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:21:03 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:21:03 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 19 17:21:03 functional-622000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Aug 19 17:21:03 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:21:03 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-19T17:21:05Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.061352] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +2.454350] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.095628] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.097890] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.135254] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.642141] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.053482] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.547324] systemd-fstab-generator[1462]: Ignoring "noauto" option for root device
	[  +3.456953] systemd-fstab-generator[1592]: Ignoring "noauto" option for root device
	[  +0.049385] kauditd_printk_skb: 70 callbacks suppressed
	[Aug19 17:02] systemd-fstab-generator[1997]: Ignoring "noauto" option for root device
	[  +0.071304] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.815922] systemd-fstab-generator[2131]: Ignoring "noauto" option for root device
	[  +0.113741] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.001342] kauditd_printk_skb: 98 callbacks suppressed
	[ +26.946888] systemd-fstab-generator[3048]: Ignoring "noauto" option for root device
	[  +0.280843] systemd-fstab-generator[3084]: Ignoring "noauto" option for root device
	[  +0.156587] systemd-fstab-generator[3096]: Ignoring "noauto" option for root device
	[  +0.148300] systemd-fstab-generator[3110]: Ignoring "noauto" option for root device
	[  +5.168584] kauditd_printk_skb: 91 callbacks suppressed
	[Aug19 17:10] clocksource: timekeeping watchdog on CPU1: Marking clocksource 'tsc' as unstable because the skew is too large:
	[  +0.000086] clocksource:                       'hpet' wd_now: 49814ab6 wd_last: 48eef9da mask: ffffffff
	[  +0.000045] clocksource:                       'tsc' cs_now: 70667105109 cs_last: 705b0d6509b mask: ffffffffffffffff
	[  +0.000180] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
	[  +0.001515] clocksource: Checking clocksource tsc synchronization from CPU 1.
	
	
	==> kernel <==
	 17:22:04 up 20 min,  0 users,  load average: 0.00, 0.00, 0.00
	Linux functional-622000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 17:22:03 functional-622000 kubelet[2004]: I0819 17:22:03.845536    2004 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.845710    2004 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.845862    2004 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.846006    2004 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.846333    2004 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.846598    2004 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.847316    2004 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.847526    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.847830    2004 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.847931    2004 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:22:03 functional-622000 kubelet[2004]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:22:03 functional-622000 kubelet[2004]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:22:03 functional-622000 kubelet[2004]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:22:03 functional-622000 kubelet[2004]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.848111    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.848149    2004 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.849558    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.849742    2004 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.850140    2004 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.851363    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:22:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:22:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:22:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:22:03Z\\\",\\\"lastTransitionTime\\\":\\\"2024-08-19T17:22:03Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 19m16.275246045s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/var/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-622000\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000/status?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.852599    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.853654    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.854630    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.855618    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:22:03 functional-622000 kubelet[2004]: E0819 17:22:03.855658    2004 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"

-- /stdout --
** stderr ** 
	E0819 10:21:03.522046    3812 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:21:03.537703    3812 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:21:03.554084    3812 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:21:03.568902    3812 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:21:03.583937    3812 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:21:03.597626    3812 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:21:03.610435    3812 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:21:03.624286    3812 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
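Every probe above fails the same way, so a quick sanity check is to rerun one of the logged commands by hand inside the guest. A minimal sketch (profile name taken from this run; the first command's output is exactly what the log above reports):

	$ minikube ssh -p functional-622000 "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

	# dockerd itself is the suspect, so check the unit and its recent log:
	$ minikube ssh -p functional-622000 "sudo systemctl status docker"
	$ minikube ssh -p functional-622000 "sudo journalctl -u docker --no-pager -n 50"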
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-622000 -n functional-622000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-622000 -n functional-622000: exit status 2 (157.018286ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-622000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (120.51s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (120.35s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-622000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-622000 get pods: exit status 1 (2.041415125s)

** stderr ** 
	E0819 10:22:06.114562    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	E0819 10:22:06.216579    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	E0819 10:22:06.318670    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	E0819 10:22:06.420829    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	E0819 10:22:06.523427    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.169.0.4:8441/api?timeout=32s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	The connection to the server 192.169.0.4:8441 was refused - did you specify the right host or port?

** /stderr **
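The kubectl failure is the same apiserver outage seen from the client side. A quick manual probe against the health endpoint, assuming the same kubeconfig context (a sketch, not part of the captured run):

	$ kubectl --context functional-622000 get --raw /readyz
	# while the apiserver is down this fails the same way: dial tcp 192.169.0.4:8441: connect: connection refused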
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-622000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-622000 -n functional-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-622000 -n functional-622000: exit status 2 (1.584761512s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 logs -n 25: (1m56.511635617s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                              Args                              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | pause                                                          |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 09:58 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 09:58 PDT | 19 Aug 24 10:00 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-492000 --log_dir                                        | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 10:00 PDT | 19 Aug 24 10:01 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| delete  | -p nospam-492000                                               | nospam-492000     | jenkins | v1.33.1 | 19 Aug 24 10:01 PDT | 19 Aug 24 10:01 PDT |
	| start   | -p functional-622000                                           | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:01 PDT | 19 Aug 24 10:02 PDT |
	|         | --memory=4000                                                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                          |                   |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit                                   |                   |         |         |                     |                     |
	| start   | -p functional-622000                                           | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:02 PDT |                     |
	|         | --alsologtostderr -v=8                                         |                   |         |         |                     |                     |
	| cache   | functional-622000 cache add                                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:08 PDT | 19 Aug 24 10:10 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | functional-622000 cache add                                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:10 PDT | 19 Aug 24 10:12 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | functional-622000 cache add                                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:12 PDT | 19 Aug 24 10:14 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-622000 cache add                                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:14 PDT | 19 Aug 24 10:15 PDT |
	|         | minikube-local-cache-test:functional-622000                    |                   |         |         |                     |                     |
	| cache   | functional-622000 cache delete                                 | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT | 19 Aug 24 10:15 PDT |
	|         | minikube-local-cache-test:functional-622000                    |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT | 19 Aug 24 10:15 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | list                                                           | minikube          | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT | 19 Aug 24 10:15 PDT |
	| ssh     | functional-622000 ssh sudo                                     | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT |                     |
	|         | crictl images                                                  |                   |         |         |                     |                     |
	| ssh     | functional-622000                                              | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:15 PDT |                     |
	|         | ssh sudo docker rmi                                            |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| ssh     | functional-622000 ssh                                          | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:16 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-622000 cache reload                                 | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:16 PDT | 19 Aug 24 10:18 PDT |
	| ssh     | functional-622000 ssh                                          | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:18 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 19 Aug 24 10:18 PDT | 19 Aug 24 10:18 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.33.1 | 19 Aug 24 10:18 PDT | 19 Aug 24 10:18 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| kubectl | functional-622000 kubectl --                                   | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:20 PDT |                     |
	|         | --context functional-622000                                    |                   |         |         |                     |                     |
	|         | get pods                                                       |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:02:46
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:02:46.715279    3149 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:02:46.715467    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715473    3149 out.go:358] Setting ErrFile to fd 2...
	I0819 10:02:46.715476    3149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:02:46.715649    3149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:02:46.717106    3149 out.go:352] Setting JSON to false
	I0819 10:02:46.739543    3149 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1936,"bootTime":1724085030,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:02:46.739637    3149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:02:46.761631    3149 out.go:177] * [functional-622000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:02:46.804362    3149 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:02:46.804421    3149 notify.go:220] Checking for updates...
	I0819 10:02:46.847125    3149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:02:46.868395    3149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:02:46.889188    3149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:02:46.931247    3149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:02:46.952016    3149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:02:46.974016    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:46.974175    3149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:02:46.974828    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:46.974917    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:46.984546    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50192
	I0819 10:02:46.984906    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:46.985340    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:46.985351    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:46.985609    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:46.985745    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.014206    3149 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:02:47.056388    3149 start.go:297] selected driver: hyperkit
	I0819 10:02:47.056417    3149 start.go:901] validating driver "hyperkit" against &{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.056645    3149 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:02:47.056829    3149 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.057043    3149 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:02:47.066748    3149 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:02:47.070635    3149 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.070656    3149 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:02:47.073332    3149 cni.go:84] Creating CNI manager for ""
	I0819 10:02:47.073357    3149 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 10:02:47.073438    3149 start.go:340] cluster config:
	{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:02:47.073535    3149 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:02:47.116046    3149 out.go:177] * Starting "functional-622000" primary control-plane node in "functional-622000" cluster
	I0819 10:02:47.137321    3149 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:02:47.137398    3149 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:02:47.137437    3149 cache.go:56] Caching tarball of preloaded images
	I0819 10:02:47.137630    3149 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:02:47.137652    3149 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:02:47.137794    3149 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/config.json ...
	I0819 10:02:47.138761    3149 start.go:360] acquireMachinesLock for functional-622000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:02:47.138881    3149 start.go:364] duration metric: took 95.93µs to acquireMachinesLock for "functional-622000"
	I0819 10:02:47.138927    3149 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:02:47.138944    3149 fix.go:54] fixHost starting: 
	I0819 10:02:47.139354    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:02:47.139383    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:02:47.148422    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50194
	I0819 10:02:47.148784    3149 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:02:47.149127    3149 main.go:141] libmachine: Using API Version  1
	I0819 10:02:47.149154    3149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:02:47.149416    3149 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:02:47.149542    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.149650    3149 main.go:141] libmachine: (functional-622000) Calling .GetState
	I0819 10:02:47.149730    3149 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:02:47.149822    3149 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
	I0819 10:02:47.150790    3149 fix.go:112] recreateIfNeeded on functional-622000: state=Running err=<nil>
	W0819 10:02:47.150805    3149 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:02:47.172224    3149 out.go:177] * Updating the running hyperkit "functional-622000" VM ...
	I0819 10:02:47.193060    3149 machine.go:93] provisionDockerMachine start ...
	I0819 10:02:47.193093    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.193438    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.193671    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.193895    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194183    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.194389    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.194647    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.194938    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.194949    3149 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:02:47.257006    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.257020    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257151    3149 buildroot.go:166] provisioning hostname "functional-622000"
	I0819 10:02:47.257163    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.257264    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.257362    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.257459    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257534    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.257627    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.257768    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.257923    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.257933    3149 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-622000 && echo "functional-622000" | sudo tee /etc/hostname
	I0819 10:02:47.330881    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-622000
	
	I0819 10:02:47.330901    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.331043    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.331162    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331251    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.331340    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.331465    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.331608    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.331620    3149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-622000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-622000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-622000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:02:47.392695    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:02:47.392714    3149 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:02:47.392730    3149 buildroot.go:174] setting up certificates
	I0819 10:02:47.392736    3149 provision.go:84] configureAuth start
	I0819 10:02:47.392747    3149 main.go:141] libmachine: (functional-622000) Calling .GetMachineName
	I0819 10:02:47.392879    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:47.392977    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.393055    3149 provision.go:143] copyHostCerts
	I0819 10:02:47.393086    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393160    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:02:47.393169    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:02:47.393370    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:02:47.393581    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393621    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:02:47.393626    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:02:47.393737    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:02:47.393914    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.393957    3149 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:02:47.393962    3149 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:02:47.394039    3149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:02:47.394180    3149 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.functional-622000 san=[127.0.0.1 192.169.0.4 functional-622000 localhost minikube]
	I0819 10:02:47.551861    3149 provision.go:177] copyRemoteCerts
	I0819 10:02:47.551924    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:02:47.551939    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.552077    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.552163    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.552249    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.552354    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.590340    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:02:47.590426    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:02:47.611171    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:02:47.611243    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 10:02:47.631670    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:02:47.631735    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:02:47.651195    3149 provision.go:87] duration metric: took 258.447258ms to configureAuth
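	configureAuth above regenerated the host certs and copied ca.pem, server.pem, and server-key.pem into /etc/docker. If the TLS material is ever in doubt, the copied server certificate can be inspected in place; a sketch, assuming openssl is available in the guest image (paths as logged above):
	
	$ minikube ssh -p functional-622000 "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates"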
	I0819 10:02:47.651207    3149 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:02:47.651340    3149 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:02:47.651354    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.651503    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.651612    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.651695    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651787    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.651883    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.652007    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.652132    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.652140    3149 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:02:47.713196    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:02:47.713207    3149 buildroot.go:70] root file system type: tmpfs
	I0819 10:02:47.713274    3149 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:02:47.713289    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.713416    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.713502    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713589    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.713668    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.713818    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.713957    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.714002    3149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:02:47.788841    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
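	The empty ExecStart= followed by a second ExecStart= is the standard systemd idiom for replacing, rather than appending to, the command inherited from the base unit, exactly as the comment block in the unit explains. The same idiom in a minimal stand-alone drop-in (hypothetical override path, not part of this run):
	
	# /etc/systemd/system/docker.service.d/10-exec.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	
	$ sudo systemctl daemon-reload && sudo systemctl restart docker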
	
	I0819 10:02:47.788868    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.789014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.789110    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789218    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.789323    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.789459    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.789600    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.789615    3149 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:02:47.859208    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:02:47.859221    3149 machine.go:96] duration metric: took 666.140503ms to provisionDockerMachine
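	The diff -u … || { … } command executed just above is an idempotent-update idiom: diff exits non-zero only when the freshly generated unit differs from the installed one, so the move/daemon-reload/restart branch runs only on an actual change, which is why this run finished in milliseconds without restarting docker. Stripped to its shape (hypothetical file names):
	
	$ sudo diff -u /etc/foo.conf /etc/foo.conf.new || { sudo mv /etc/foo.conf.new /etc/foo.conf; sudo systemctl daemon-reload; }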
	I0819 10:02:47.859235    3149 start.go:293] postStartSetup for "functional-622000" (driver="hyperkit")
	I0819 10:02:47.859243    3149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:02:47.859253    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:47.859433    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:02:47.859447    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.859550    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.859628    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.859723    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.859805    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:47.897960    3149 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:02:47.900903    3149 command_runner.go:130] > NAME=Buildroot
	I0819 10:02:47.900911    3149 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 10:02:47.900915    3149 command_runner.go:130] > ID=buildroot
	I0819 10:02:47.900919    3149 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 10:02:47.900923    3149 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 10:02:47.901013    3149 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:02:47.901024    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:02:47.901125    3149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:02:47.901317    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:02:47.901324    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:02:47.901516    3149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> hosts in /etc/test/nested/copy/2174
	I0819 10:02:47.901521    3149 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts -> /etc/test/nested/copy/2174/hosts
	I0819 10:02:47.901573    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2174
	I0819 10:02:47.908902    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:02:47.928770    3149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts --> /etc/test/nested/copy/2174/hosts (40 bytes)
	I0819 10:02:47.949590    3149 start.go:296] duration metric: took 90.345683ms for postStartSetup
	I0819 10:02:47.949608    3149 fix.go:56] duration metric: took 810.670757ms for fixHost
	I0819 10:02:47.949626    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:47.949765    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:47.949853    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.949932    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:47.950014    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:47.950145    3149 main.go:141] libmachine: Using SSH client type: native
	I0819 10:02:47.950278    3149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1899ea0] 0x189cc00 <nil>  [] 0s} 192.169.0.4 22 <nil> <nil>}
	I0819 10:02:47.950285    3149 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:02:48.015962    3149 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724086968.201080300
	
	I0819 10:02:48.015973    3149 fix.go:216] guest clock: 1724086968.201080300
	I0819 10:02:48.015979    3149 fix.go:229] Guest: 2024-08-19 10:02:48.2010803 -0700 PDT Remote: 2024-08-19 10:02:47.949616 -0700 PDT m=+1.269337789 (delta=251.4643ms)
	I0819 10:02:48.015999    3149 fix.go:200] guest clock delta is within tolerance: 251.4643ms
	I0819 10:02:48.016003    3149 start.go:83] releasing machines lock for "functional-622000", held for 877.108871ms
	I0819 10:02:48.016022    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016177    3149 main.go:141] libmachine: (functional-622000) Calling .GetIP
	I0819 10:02:48.016275    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016589    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016695    3149 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:02:48.016767    3149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:02:48.016795    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016806    3149 ssh_runner.go:195] Run: cat /version.json
	I0819 10:02:48.016817    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
	I0819 10:02:48.016882    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.016971    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.016990    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
	I0819 10:02:48.017080    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017101    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
	I0819 10:02:48.017164    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.017193    3149 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
	I0819 10:02:48.017328    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
	I0819 10:02:48.049603    3149 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 10:02:48.049829    3149 ssh_runner.go:195] Run: systemctl --version
	I0819 10:02:48.095984    3149 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 10:02:48.096931    3149 command_runner.go:130] > systemd 252 (252)
	I0819 10:02:48.096961    3149 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 10:02:48.097053    3149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:02:48.102122    3149 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 10:02:48.102143    3149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:02:48.102177    3149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:02:48.110952    3149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
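The find/-exec mv one-liner above renames any bridge or podman CNI configs so the kubelet ignores them; in this run nothing matched. A rough Go equivalent of that disable step (a hypothetical sketch, not minikube's implementation; it would need root to actually rename files under /etc/cni/net.d):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Rename bridge/podman CNI configs so they are ignored, mirroring
		// the logged find/-exec mv one-liner.
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled, matches find's -not -name filter
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Println("skip:", err)
					continue
				}
				fmt.Println("disabled", m)
			}
		}
	}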
	I0819 10:02:48.110963    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.111059    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.126457    3149 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0819 10:02:48.126734    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:02:48.135958    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:02:48.145231    3149 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.145276    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:02:48.154341    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.163160    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:02:48.171882    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:02:48.181115    3149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:02:48.190524    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:02:48.200851    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:02:48.209942    3149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
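The run of sed commands above edits /etc/containerd/config.toml in place: it pins the sandbox (pause) image, forces SystemdCgroup = false so containerd uses the cgroupfs driver, migrates runtime entries to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports. A minimal Go sketch of two of those rewrites applied to an in-memory fragment (the starting values in the fragment are assumed for illustration):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Assumed config.toml fragment before the rewrite.
		conf := `[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		// Each rule mirrors one of the logged sed substitutions.
		rules := []struct{ re, repl string }{
			{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`},
			{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		}
		for _, r := range rules {
			conf = regexp.MustCompile(r.re).ReplaceAllString(conf, r.repl)
		}
		fmt.Print(conf)
	}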
	I0819 10:02:48.219031    3149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:02:48.227175    3149 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 10:02:48.227346    3149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
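Both kernel settings touched above are Kubernetes networking prerequisites: bridged pod traffic must traverse iptables (net.bridge.bridge-nf-call-iptables = 1), and the node must forward IPv4 (net.ipv4.ip_forward = 1). A small Linux-only Go sketch that reads the same sysctls from /proc (illustration only; minikube performs this over SSH inside the VM):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Read the two networking sysctls Kubernetes depends on.
		for _, p := range []string{
			"/proc/sys/net/bridge/bridge-nf-call-iptables",
			"/proc/sys/net/ipv4/ip_forward",
		} {
			b, err := os.ReadFile(p)
			if err != nil {
				fmt.Println(p, "unavailable:", err)
				continue
			}
			fmt.Println(p, "=", strings.TrimSpace(string(b)))
		}
	}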
	I0819 10:02:48.235625    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.388843    3149 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:02:48.408053    3149 start.go:495] detecting cgroup driver to use...
	I0819 10:02:48.408141    3149 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:02:48.422240    3149 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0819 10:02:48.422854    3149 command_runner.go:130] > [Unit]
	I0819 10:02:48.422864    3149 command_runner.go:130] > Description=Docker Application Container Engine
	I0819 10:02:48.422868    3149 command_runner.go:130] > Documentation=https://docs.docker.com
	I0819 10:02:48.422873    3149 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0819 10:02:48.422878    3149 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0819 10:02:48.422882    3149 command_runner.go:130] > StartLimitBurst=3
	I0819 10:02:48.422886    3149 command_runner.go:130] > StartLimitIntervalSec=60
	I0819 10:02:48.422890    3149 command_runner.go:130] > [Service]
	I0819 10:02:48.422896    3149 command_runner.go:130] > Type=notify
	I0819 10:02:48.422900    3149 command_runner.go:130] > Restart=on-failure
	I0819 10:02:48.422906    3149 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0819 10:02:48.422914    3149 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0819 10:02:48.422920    3149 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0819 10:02:48.422926    3149 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0819 10:02:48.422932    3149 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0819 10:02:48.422942    3149 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0819 10:02:48.422948    3149 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0819 10:02:48.422956    3149 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0819 10:02:48.422962    3149 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0819 10:02:48.422966    3149 command_runner.go:130] > ExecStart=
	I0819 10:02:48.422983    3149 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0819 10:02:48.422987    3149 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0819 10:02:48.422994    3149 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0819 10:02:48.423000    3149 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0819 10:02:48.423003    3149 command_runner.go:130] > LimitNOFILE=infinity
	I0819 10:02:48.423011    3149 command_runner.go:130] > LimitNPROC=infinity
	I0819 10:02:48.423015    3149 command_runner.go:130] > LimitCORE=infinity
	I0819 10:02:48.423019    3149 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0819 10:02:48.423024    3149 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0819 10:02:48.423027    3149 command_runner.go:130] > TasksMax=infinity
	I0819 10:02:48.423030    3149 command_runner.go:130] > TimeoutStartSec=0
	I0819 10:02:48.423035    3149 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0819 10:02:48.423039    3149 command_runner.go:130] > Delegate=yes
	I0819 10:02:48.423043    3149 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0819 10:02:48.423047    3149 command_runner.go:130] > KillMode=process
	I0819 10:02:48.423050    3149 command_runner.go:130] > [Install]
	I0819 10:02:48.423059    3149 command_runner.go:130] > WantedBy=multi-user.target
	I0819 10:02:48.423191    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.438160    3149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:02:48.458938    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:02:48.471298    3149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:02:48.481842    3149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:02:48.498207    3149 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
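Note that /etc/crictl.yaml is written twice in this sequence: first pointing at containerd's socket, then, once Docker is selected as the runtime, at cri-dockerd's. A sketch of that mapping in Go (endpointFor is a hypothetical helper name; both socket paths are taken verbatim from the log):

	package main

	import "fmt"

	// endpointFor maps the selected container runtime to the CRI socket
	// crictl should use.
	func endpointFor(runtime string) (string, bool) {
		sockets := map[string]string{
			"containerd": "unix:///run/containerd/containerd.sock",
			"docker":     "unix:///var/run/cri-dockerd.sock",
		}
		s, ok := sockets[runtime]
		return s, ok
	}

	func main() {
		for _, rt := range []string{"containerd", "docker"} {
			if s, ok := endpointFor(rt); ok {
				fmt.Printf("# /etc/crictl.yaml when %s is selected\nruntime-endpoint: %s\n\n", rt, s)
			}
		}
	}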
	I0819 10:02:48.498560    3149 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:02:48.501580    3149 command_runner.go:130] > /usr/bin/cri-dockerd
	I0819 10:02:48.501729    3149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:02:48.508831    3149 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:02:48.522701    3149 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:02:48.665555    3149 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:02:48.815200    3149 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:02:48.815277    3149 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
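The 130-byte /etc/docker/daemon.json written above switches Docker to the cgroupfs cgroup driver. The log records only the byte count, not the contents; the exec-opts form below is the conventional way to set the driver and is an assumption about what minikube actually writes:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed daemon.json contents; only the size appears in the log.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		out, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}

Whatever the exact file contents, the daemon-reload and docker restart that follow are what pick the new driver up, and it is that restart which fails below after 1m11s.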
	I0819 10:02:48.832404    3149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:02:48.960435    3149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:04:00.136198    3149 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0819 10:04:00.136213    3149 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0819 10:04:00.136223    3149 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.17566847s)
	I0819 10:04:00.136284    3149 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:04:00.148256    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.148298    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	I0819 10:04:00.148306    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.148320    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	I0819 10:04:00.148330    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.148340    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.148351    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148359    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.148370    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148381    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148392    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148418    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148438    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148455    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148466    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148479    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148490    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148504    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148513    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.148540    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.148550    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.148560    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.148568    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.148578    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.148586    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.148595    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.148604    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.148612    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.148621    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.148630    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148638    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.148647    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.148656    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.148672    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148682    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148691    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148700    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148709    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148720    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148729    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148811    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.148823    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148831    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148840    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148849    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148858    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148867    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148876    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148884    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148893    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148902    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148911    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148920    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148928    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148942    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.148951    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148959    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.148968    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.148977    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148989    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.148999    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149127    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.149138    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149151    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.149159    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.149168    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.149175    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.149183    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.149191    3149 command_runner.go:130] > Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	I0819 10:04:00.149204    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.149212    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	I0819 10:04:00.149230    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.149242    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	I0819 10:04:00.149252    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.149259    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.149267    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.149273    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.149280    3149 command_runner.go:130] > Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.149286    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.149293    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.149302    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.149310    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.149320    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0819 10:04:00.149325    3149 command_runner.go:130] > Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.149331    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.149336    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.149341    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.149347    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	I0819 10:04:00.149464    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.149477    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	I0819 10:04:00.149486    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.149496    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.149505    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.149514    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.149523    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149534    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149546    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149561    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149571    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149582    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149592    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149601    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149610    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149624    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149633    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.149656    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.149665    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.149674    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.149682    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.149690    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.149699    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.149713    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.149723    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.149732    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.149740    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.149753    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149761    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.149771    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.149780    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.149790    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149799    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149808    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149817    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149830    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149840    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149848    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149857    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.149938    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149950    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149959    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149968    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149976    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149986    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.149994    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150003    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150018    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150027    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150035    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150044    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150053    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150061    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.150074    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150083    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150092    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150101    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150113    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.150122    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150133    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.150146    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.150264    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.150275    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.150284    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.150292    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.150300    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.150307    3149 command_runner.go:130] > Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	I0819 10:04:00.150315    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.150322    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	I0819 10:04:00.150338    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.150349    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.150362    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	I0819 10:04:00.150374    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.150382    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.150389    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.150396    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.150402    3149 command_runner.go:130] > Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.150412    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.150420    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.150429    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.150437    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.150446    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.150455    3149 command_runner.go:130] > Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.150461    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.150467    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.150473    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.150480    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	I0819 10:04:00.150599    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	I0819 10:04:00.150613    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	I0819 10:04:00.150627    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0819 10:04:00.150637    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0819 10:04:00.150645    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0819 10:04:00.150655    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0819 10:04:00.150664    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150675    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150684    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150700    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150710    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150721    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150730    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150739    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150748    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150763    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150772    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0819 10:04:00.150786    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0819 10:04:00.150795    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0819 10:04:00.150805    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0819 10:04:00.150813    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	I0819 10:04:00.150821    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0819 10:04:00.150830    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0819 10:04:00.150839    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0819 10:04:00.150849    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0819 10:04:00.150858    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0819 10:04:00.150866    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0819 10:04:00.150875    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150883    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0819 10:04:00.150892    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0819 10:04:00.150902    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0819 10:04:00.150912    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150921    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150930    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150939    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150948    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150958    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150969    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.150982    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0819 10:04:00.151044    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151058    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151067    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151076    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151085    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151095    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151104    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151113    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151122    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151130    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151139    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151148    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151156    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151164    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0819 10:04:00.151173    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151182    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151191    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151200    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151215    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0819 10:04:00.151225    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0819 10:04:00.151238    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0819 10:04:00.151350    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0819 10:04:00.151361    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0819 10:04:00.151380    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	I0819 10:04:00.151390    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0819 10:04:00.151398    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0819 10:04:00.151406    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0819 10:04:00.151414    3149 command_runner.go:130] > Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	I0819 10:04:00.151422    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0819 10:04:00.151429    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	I0819 10:04:00.151445    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0819 10:04:00.151456    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0819 10:04:00.151464    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	I0819 10:04:00.151474    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0819 10:04:00.151483    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	I0819 10:04:00.151496    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	I0819 10:04:00.151504    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	I0819 10:04:00.151510    3149 command_runner.go:130] > Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	I0819 10:04:00.151519    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151531    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151541    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151551    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151563    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151616    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151631    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151641    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151653    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151663    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151673    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151683    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151693    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151704    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151714    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151724    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151734    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151744    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151754    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151767    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151777    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151787    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151805    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151815    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151865    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151878    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151887    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151896    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151908    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151918    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151928    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151938    3149 command_runner.go:130] > Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151948    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.151962    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.151972    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151982    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.151993    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152004    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152013    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152023    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152035    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152045    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152055    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152134    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152147    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152158    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152170    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152180    3149 command_runner.go:130] > Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152190    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152200    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152214    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152224    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152234    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152244    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152254    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152263    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152273    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152283    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152297    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152307    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152317    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0819 10:04:00.152327    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0819 10:04:00.152413    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152425    3149 command_runner.go:130] > Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0819 10:04:00.152439    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152449    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152459    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	I0819 10:04:00.152467    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152479    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152493    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152503    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	I0819 10:04:00.152514    3149 command_runner.go:130] > Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152521    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	I0819 10:04:00.152532    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	I0819 10:04:00.152543    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152555    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152567    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	I0819 10:04:00.152575    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152590    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152599    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152609    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	I0819 10:04:00.152617    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152761    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152775    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152785    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152800    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	I0819 10:04:00.152808    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152817    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152828    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	I0819 10:04:00.152836    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152847    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152858    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152866    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152876    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	I0819 10:04:00.152883    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152895    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152904    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	I0819 10:04:00.152912    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152925    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152937    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.152946    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152957    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	I0819 10:04:00.152965    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.152974    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152984    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	I0819 10:04:00.152996    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153006    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153016    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	I0819 10:04:00.153024    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153042    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153053    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153069    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153079    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153093    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	I0819 10:04:00.153106    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153116    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153126    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	I0819 10:04:00.153134    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153147    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153159    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153175    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153190    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	I0819 10:04:00.153200    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153213    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153227    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153243    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153256    3149 command_runner.go:130] > Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0819 10:04:00.153269    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153279    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153290    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	I0819 10:04:00.153297    3149 command_runner.go:130] > Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153312    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	I0819 10:04:00.153323    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0819 10:04:00.153334    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153345    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	I0819 10:04:00.153361    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	I0819 10:04:00.153372    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0819 10:04:00.153379    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	I0819 10:04:00.153414    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0819 10:04:00.153426    3149 command_runner.go:130] > Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0819 10:04:00.153433    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	I0819 10:04:00.153439    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	I0819 10:04:00.153445    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	I0819 10:04:00.153454    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	I0819 10:04:00.153461    3149 command_runner.go:130] > Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	I0819 10:04:00.153471    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0819 10:04:00.153480    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0819 10:04:00.153486    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0819 10:04:00.153492    3149 command_runner.go:130] > Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	I0819 10:04:00.188229    3149 out.go:201] 
	W0819 10:04:00.209936    3149 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:01:44 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.179943585Z" level=info msg="Starting up"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.180942482Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:44 functional-622000 dockerd[522]: time="2024-08-19T17:01:44.181508233Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=529
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.197101767Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212309114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212331640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212367467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212377477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212427828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212459845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212614080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212648283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212660789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212668790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212725662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.212870308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214380176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214415646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214516813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214549580Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214611309Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.214671792Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216534676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216610115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216626522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216638444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216647918Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216733763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.216945239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217040348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217073947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217084934Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217096633Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217105205Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217112660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217121182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217136065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217146862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217154975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217162140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217174944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217184058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217193346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217205266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217214712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217222710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217230703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217238674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217246762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217255635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217263095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217270770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217278425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217287600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217301045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217309187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217316720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217362662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217376693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217384264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217392026Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217398807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217406542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217413058Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217541797Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217596199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217626417Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:44 functional-622000 dockerd[529]: time="2024-08-19T17:01:44.217704249Z" level=info msg="containerd successfully booted in 0.021235s"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.213638513Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.218697243Z" level=info msg="Loading containers: start."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.303833103Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.394776557Z" level=info msg="Loading containers: done."
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.401999290Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.402083612Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430356737Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:45 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:45 functional-622000 dockerd[522]: time="2024-08-19T17:01:45.430518481Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.592352095Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593517361Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593620938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.593657991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:46 functional-622000 dockerd[522]: time="2024-08-19T17:01:46.594083691Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 19 17:01:46 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:47 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:47 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.633757457Z" level=info msg="Starting up"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634184054Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:47 functional-622000 dockerd[867]: time="2024-08-19T17:01:47.634821921Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=873
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.653253192Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670539137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670588711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670618159Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670627892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670647557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670655607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670761247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670822043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670833696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670840772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670856847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.670937210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672479320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672517250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672598536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672608718Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672627499Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672639411Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672775631Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672821269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672833738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672843249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672853396Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.672882179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673016560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673078296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673089866Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673100402Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673108857Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673116983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673124628Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673133352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673141618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673150296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673158127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673165754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673407110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673425300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673438713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673449750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673459416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673470226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673482043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673493250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673506067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673516910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673527469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673573561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673591400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673631719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673719578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673789779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673825158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673835448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673846514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673856283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673868043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.673875479Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674416665Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674488718Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674551662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:47 functional-622000 dockerd[873]: time="2024-08-19T17:01:47.674591532Z" level=info msg="containerd successfully booted in 0.021887s"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.701018022Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.703929003Z" level=info msg="Loading containers: start."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.774231260Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.832584697Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.874250689Z" level=info msg="Loading containers: done."
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884709929Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.884767272Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907293087Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:48 functional-622000 dockerd[867]: time="2024-08-19T17:01:48.907348774Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:48 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:01:53 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.019481735Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020418313Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020517778Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020639216Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:01:53 functional-622000 dockerd[867]: time="2024-08-19T17:01:53.020676616Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:01:54 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:01:54 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:01:54 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.052721036Z" level=info msg="Starting up"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.053665999Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:01:54 functional-622000 dockerd[1220]: time="2024-08-19T17:01:54.054204471Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1227
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.071110001Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086417619Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086519393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086575826Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086609098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086649285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086679999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086800826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086837952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086867954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086894854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.086930771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.087026239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088598589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088650891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088784035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088826554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088863800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.088900283Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089048412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089096938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089133463Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089178884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089213509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089263884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089475204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089597981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089639022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089670206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089699866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089728982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089757898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089787686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089821007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089859340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089892427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089920146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089960280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.089995294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090025807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090055021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090088517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090119075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090147596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090215944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090256138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090288110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090316417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090344756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090386745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090425469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090489354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090525304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090598037Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090641245Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090672551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090701383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090729639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090758285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090785175Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.090962205Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091049960Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091113179Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:01:54 functional-622000 dockerd[1227]: time="2024-08-19T17:01:54.091149051Z" level=info msg="containerd successfully booted in 0.020375s"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.080403371Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.185866595Z" level=info msg="Loading containers: start."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.255656572Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.313204760Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.358744224Z" level=info msg="Loading containers: done."
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365948882Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.365999910Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384916152Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:01:55 functional-622000 dockerd[1220]: time="2024-08-19T17:01:55.384992962Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:01:55 functional-622000 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237378813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237442064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237454926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.237547247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240823938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240944115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.240972248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.241074980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251431426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251590345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251601329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.251683938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253924695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253986191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.253999192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.254059512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444251009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444317593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444336465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.444427584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458785591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458823990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.458891334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477642840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477748278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477759630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.477819081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480734366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480804224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480826831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:00 functional-622000 dockerd[1227]: time="2024-08-19T17:02:00.480950777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561746494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561814928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561824738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.561890303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765174254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765250994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765324828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.765477954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798811898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798944640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.798957582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.799103034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881637043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.881920803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882025155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:10 functional-622000 dockerd[1227]: time="2024-08-19T17:02:10.882369450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402231252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402303190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402316565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.402385693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418387475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418603733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418627856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.418851110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907392815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.907863518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908056887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.908648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989553144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989622168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.989632381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:11 functional-622000 dockerd[1227]: time="2024-08-19T17:02:11.992038509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.526555515Z" level=info msg="ignoring event" container=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527066255Z" level=info msg="shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527315561Z" level=warning msg="cleaning up after shim disconnected" id=75a54acd5f43a8464f6e3bdf08d9643f5fb2c461e00b9647b10b920f4bc5ae20 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.527360670Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1220]: time="2024-08-19T17:02:21.607857375Z" level=info msg="ignoring event" container=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608302054Z" level=info msg="shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608658326Z" level=warning msg="cleaning up after shim disconnected" id=2174c907477d018c98cd122b85bb274b6102a26b3da333f30c8fbb56b73debc3 namespace=moby
	Aug 19 17:02:21 functional-622000 dockerd[1227]: time="2024-08-19T17:02:21.608740170Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.158148283Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:02:49 functional-622000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268535097Z" level=info msg="shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.268717864Z" level=info msg="ignoring event" container=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268586609Z" level=warning msg="cleaning up after shim disconnected" id=c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.268964831Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.273347289Z" level=info msg="ignoring event" container=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.273955655Z" level=info msg="shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274023465Z" level=warning msg="cleaning up after shim disconnected" id=d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.274033869Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290067625Z" level=info msg="ignoring event" container=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.290112205Z" level=info msg="ignoring event" container=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290424043Z" level=info msg="shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290536979Z" level=warning msg="cleaning up after shim disconnected" id=8c4da3df6651a7a8695c4e1ba04c28f8c7716ffac36d058dbe2240ebfd94b632 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290582368Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290465882Z" level=info msg="shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290733155Z" level=warning msg="cleaning up after shim disconnected" id=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290741439Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291499508Z" level=info msg="ignoring event" container=af41f2afe356ee323ec2e60cc5291e44d479e458e2ae162338a02e3850aca36e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.291535224Z" level=info msg="ignoring event" container=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.290595808Z" level=info msg="shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297716002Z" level=warning msg="cleaning up after shim disconnected" id=60aa0b697a31bec2bcef9bbda36567c885c612b5a25590b142c1e383c027d392 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297725076Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.297983983Z" level=info msg="shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298045597Z" level=warning msg="cleaning up after shim disconnected" id=6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.298148865Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302154900Z" level=info msg="ignoring event" container=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.302226976Z" level=info msg="ignoring event" container=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302717446Z" level=info msg="shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302759085Z" level=warning msg="cleaning up after shim disconnected" id=12d43bfdac8bd40f9de79aaf8a8595bd7bb550c50268645ef5470c1064dd0b7d namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.302767629Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308068913Z" level=info msg="shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308118671Z" level=warning msg="cleaning up after shim disconnected" id=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.308328329Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311243798Z" level=info msg="shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311327236Z" level=warning msg="cleaning up after shim disconnected" id=9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.311335697Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316752567Z" level=info msg="ignoring event" container=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316800043Z" level=info msg="ignoring event" container=91ec76fcc24ba7c3030b2e847f51a58cc30f70548da05a58200dd608ac66b290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.316819263Z" level=info msg="ignoring event" container=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317249898Z" level=info msg="shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317359801Z" level=warning msg="cleaning up after shim disconnected" id=f928650da14107107c02547ea5ef94371b9030a0ae0234921e2ad4c5f7cf7074 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.317369184Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321910919Z" level=info msg="shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321963437Z" level=warning msg="cleaning up after shim disconnected" id=94568ae18b308e1db0eccc68fdc4ba141bbac83aacc927e0480bc984deec2241 namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.321972279Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.343145333Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1220]: time="2024-08-19T17:02:49.343891870Z" level=info msg="ignoring event" container=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.344047521Z" level=info msg="shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345641889Z" level=warning msg="cleaning up after shim disconnected" id=be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.345650213Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.353197511Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.354463589Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.366627155Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:49 functional-622000 dockerd[1227]: time="2024-08-19T17:02:49.401735781Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:02:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1220]: time="2024-08-19T17:02:54.221061363Z" level=info msg="ignoring event" container=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221240161Z" level=info msg="shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221269867Z" level=warning msg="cleaning up after shim disconnected" id=5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547 namespace=moby
	Aug 19 17:02:54 functional-622000 dockerd[1227]: time="2024-08-19T17:02:54.221276283Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.230654326Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.274755484Z" level=info msg="ignoring event" container=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275154472Z" level=info msg="shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275772857Z" level=warning msg="cleaning up after shim disconnected" id=ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1227]: time="2024-08-19T17:02:59.275815643Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.299808564Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300197939Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300259721Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:02:59 functional-622000 dockerd[1220]: time="2024-08-19T17:02:59.300281777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:03:00 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:03:00 functional-622000 systemd[1]: docker.service: Consumed 2.502s CPU time.
	Aug 19 17:03:00 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:03:00 functional-622000 dockerd[3529]: time="2024-08-19T17:03:00.342173492Z" level=info msg="Starting up"
	Aug 19 17:04:00 functional-622000 dockerd[3529]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:04:00 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:04:00 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0819 10:04:00.210429    3149 out.go:270] * 
	W0819 10:04:00.211654    3149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:04:00.274709    3149 out.go:201] 
	
	
	==> Docker <==
	Aug 19 17:22:03 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:22:03 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:22:03 functional-622000 dockerd[8048]: time="2024-08-19T17:22:03.973777125Z" level=info msg="Starting up"
	Aug 19 17:23:03 functional-622000 dockerd[8048]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="error getting RW layer size for container ID '9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9df930fb96e25a030309f548ad9eaa691bb6ec9c34c3f0222287209cf0a1eca5'"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="error getting RW layer size for container ID 'ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ac04d08d92d7fb2a1de49c2d09ccf1e1ac495369196e3ee295e238a063137fbd'"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="error getting RW layer size for container ID '5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5804c49bf996f2157e77c3ce1fa8bfe12c0a05a9005bb071177e8af6aa915547'"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="error getting RW layer size for container ID 'be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:23:03 functional-622000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'be3e68635a30c2e3c5aa9bbbdc1d018971ade69741f1827171d81e59309c79aa'"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="error getting RW layer size for container ID 'c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c567be3e1fbbbd3d8bf12d31d0ff70ba434d96d4414b257ddbf0a3f0903cbf90'"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="error getting RW layer size for container ID 'd997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd997ae37ad58676adeb950972c9046b876d2024510c315d02f466bd177bd3824'"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Aug 19 17:23:03 functional-622000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:23:03 functional-622000 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="error getting RW layer size for container ID '6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:23:03 functional-622000 cri-dockerd[1120]: time="2024-08-19T17:23:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6af60647afad46f53f9f6b38a4d66bd0605b5fd8fac8aed31c5da30da84e35c5'"
	Aug 19 17:23:04 functional-622000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Aug 19 17:23:04 functional-622000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:23:04 functional-622000 systemd[1]: Starting Docker Application Container Engine...
	
	
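Note: every restart attempt in the Docker section above dies the same way — dockerd starts, spends ~60s failing to dial /run/containerd/containerd.sock, and exits, so systemd keeps cycling the unit (restart counter already at 20). Below is a minimal sketch of that same dial, assuming only the Go standard library and the socket path taken from the log; it is a hypothetical diagnostic to run inside the VM, not minikube or dockerd code.

	// containerd_probe.go: a sketch of the dial that dockerd keeps timing
	// out on. The socket path is taken from the log above; this is an
	// illustrative diagnostic, not part of minikube.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// dockerd gives up after ~60s with "context deadline exceeded";
		// a short timeout here makes a dead containerd obvious quickly.
		conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
		if err != nil {
			fmt.Println("containerd socket not answering:", err)
			return
		}
		defer conn.Close()
		fmt.Println("containerd socket accepted the connection")
	}

If this dial hangs or fails, containerd never came up listening, which matches dockerd failing before its own API socket is ever served.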
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-19T17:23:06Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.061352] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +2.454350] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.095628] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.097890] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.135254] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.642141] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.053482] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.547324] systemd-fstab-generator[1462]: Ignoring "noauto" option for root device
	[  +3.456953] systemd-fstab-generator[1592]: Ignoring "noauto" option for root device
	[  +0.049385] kauditd_printk_skb: 70 callbacks suppressed
	[Aug19 17:02] systemd-fstab-generator[1997]: Ignoring "noauto" option for root device
	[  +0.071304] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.815922] systemd-fstab-generator[2131]: Ignoring "noauto" option for root device
	[  +0.113741] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.001342] kauditd_printk_skb: 98 callbacks suppressed
	[ +26.946888] systemd-fstab-generator[3048]: Ignoring "noauto" option for root device
	[  +0.280843] systemd-fstab-generator[3084]: Ignoring "noauto" option for root device
	[  +0.156587] systemd-fstab-generator[3096]: Ignoring "noauto" option for root device
	[  +0.148300] systemd-fstab-generator[3110]: Ignoring "noauto" option for root device
	[  +5.168584] kauditd_printk_skb: 91 callbacks suppressed
	[Aug19 17:10] clocksource: timekeeping watchdog on CPU1: Marking clocksource 'tsc' as unstable because the skew is too large:
	[  +0.000086] clocksource:                       'hpet' wd_now: 49814ab6 wd_last: 48eef9da mask: ffffffff
	[  +0.000045] clocksource:                       'tsc' cs_now: 70667105109 cs_last: 705b0d6509b mask: ffffffffffffffff
	[  +0.000180] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
	[  +0.001515] clocksource: Checking clocksource tsc synchronization from CPU 1.
	
	
	==> kernel <==
	 17:24:04 up 22 min,  0 users,  load average: 0.02, 0.01, 0.00
	Linux functional-622000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 17:24:03 functional-622000 kubelet[2004]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.214096    2004 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.214254    2004 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.214516    2004 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: I0819 17:24:04.214795    2004 setters.go:600] "Node became not ready" node="functional-622000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-08-19T17:24:04Z","lastTransitionTime":"2024-08-19T17:24:04Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 21m16.645563129s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/var/run/docker.sock: read: connection reset by peer]"}
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.215792    2004 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.216136    2004 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: I0819 17:24:04.216884    2004 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.216445    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.217566    2004 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.216645    2004 kubelet.go:2911] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.217451    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:24:04Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:24:04Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:24:04Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-08-19T17:24:04Z\\\",\\\"lastTransitionTime\\\":\\\"2024-08-19T17:24:04Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 21m16.645563129s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/var/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-622000\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000/status?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.216380    2004 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.218522    2004 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.218651    2004 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.216528    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.218958    2004 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.219173    2004 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.219322    2004 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.219568    2004 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.221000    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.222213    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.223697    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.225270    2004 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-622000\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-622000?timeout=10s\": dial tcp 192.169.0.4:8441: connect: connection refused"
	Aug 19 17:24:04 functional-622000 kubelet[2004]: E0819 17:24:04.225439    2004 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0819 10:23:03.892592    3900 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:23:03.910206    3900 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:23:03.924848    3900 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:23:03.941092    3900 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:23:03.953831    3900 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:23:03.968552    3900 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:23:03.984804    3900 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0819 10:23:04.000832    3900 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
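Note: all of the kubelet errors in the captured log reduce to one symptom — every request against the Docker API socket (/var/run/docker.sock) either cannot connect or is reset. A rough sketch of the same /v1.43/version request the CRI client keeps retrying follows, assuming the default socket path; it is an illustrative diagnostic, not kubelet or cri-dockerd code.

	// version_probe.go: sketch of the /v1.43/version request failing in the
	// kubelet log above; hypothetical diagnostic, default socket path assumed.
	package main

	import (
		"context"
		"fmt"
		"io"
		"net"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Route every request over the Docker unix socket instead of
				// TCP; the host part of the URL below is then irrelevant.
				DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
					var d net.Dialer
					return d.DialContext(ctx, "unix", "/var/run/docker.sock")
				},
			},
		}
		resp, err := client.Get("http://docker/v1.43/version")
		if err != nil {
			// Matches the "connection reset by peer" errors captured above.
			fmt.Println("daemon unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(string(body))
	}

A reset on this request while the socket file still exists is consistent with connections being accepted but dockerd dying before it can answer — exactly the restart loop shown in the Docker section earlier.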
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-622000 -n functional-622000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-622000 -n functional-622000: exit status 2 (159.916658ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-622000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (120.35s)
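Note: the --format={{.APIServer}} value used above is rendered as a Go text/template over minikube's status fields, which is why the command can print just "Stopped". A minimal sketch of that mechanism follows; the Status struct here is illustrative only, not minikube's actual type.

	// format_status.go: sketch of how a --format={{.APIServer}} style flag
	// renders; the Status struct is an illustrative stand-in.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		// Prints "Stopped", matching the stdout captured above.
	}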

TestMultiControlPlane/serial/StartCluster (192.49s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-431000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-431000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (3m9.582357236s)

-- stdout --
	* [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	* Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	
	

-- /stdout --
** stderr ** 
	I0819 10:27:09.441458    4789 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:27:09.441716    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441721    4789 out.go:358] Setting ErrFile to fd 2...
	I0819 10:27:09.441725    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441914    4789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:27:09.443405    4789 out.go:352] Setting JSON to false
	I0819 10:27:09.468451    4789 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3399,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:27:09.468547    4789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:27:09.554597    4789 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:27:09.577770    4789 notify.go:220] Checking for updates...
	I0819 10:27:09.609734    4789 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:27:09.676944    4789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:09.699980    4789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:27:09.722951    4789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:27:09.744804    4789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.765726    4789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:27:09.787204    4789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:27:09.817679    4789 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 10:27:09.859821    4789 start.go:297] selected driver: hyperkit
	I0819 10:27:09.859849    4789 start.go:901] validating driver "hyperkit" against <nil>
	I0819 10:27:09.859893    4789 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:27:09.864287    4789 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.864395    4789 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:27:09.872759    4789 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:27:09.876743    4789 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.876768    4789 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:27:09.876803    4789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:27:09.877011    4789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:27:09.877072    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:09.877082    4789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 10:27:09.877094    4789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:27:09.877164    4789 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:09.877251    4789 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.919755    4789 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:27:09.940604    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:09.940675    4789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:27:09.940720    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:09.940918    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:09.940931    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:09.941271    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:09.941299    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json: {Name:mkf9dcbb24d8b9fbe62d81f81a7a87fec457d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:09.941835    4789 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:09.941963    4789 start.go:364] duration metric: took 95.166µs to acquireMachinesLock for "ha-431000"
	I0819 10:27:09.941997    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
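Note: the lock specs in this trace ({... Clock:{} Delay:500ms Timeout:13m0s ...}) describe retry-with-delay acquisition up to a deadline, which is why acquireMachinesLock can report a duration metric on success. A rough sketch of that pattern using an O_EXCL lockfile follows; the lockfile approach and names here are hypothetical, not minikube's actual lock implementation.

	// acquire_lock.go: rough sketch of retry-with-delay lock acquisition as
	// suggested by the {Delay:500ms Timeout:...} specs above; illustrative only.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_CREATE|O_EXCL makes creation atomic: exactly one caller wins.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay) // retry after the configured delay
		}
	}

	func main() {
		release, err := acquire("/tmp/ha-431000.lock", 500*time.Millisecond, 10*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held")
	}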
	I0819 10:27:09.942082    4789 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 10:27:09.963791    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:09.964075    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.964148    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:09.974068    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51111
	I0819 10:27:09.974512    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:09.974919    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:09.974932    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:09.975172    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:09.975283    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:09.975374    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:09.975471    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:09.975492    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:09.975527    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:09.975578    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975594    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975657    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:09.975695    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975707    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975719    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:09.975729    4789 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0819 10:27:09.975800    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.975970    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:09.976388    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:09.976397    4789 main.go:141] libmachine: (ha-431000) Calling .Create
	I0819 10:27:09.976462    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.976580    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:09.976459    4799 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.976633    4789 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:10.160305    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.160220    4799 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa...
	I0819 10:27:10.258779    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.258678    4799 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0819 10:27:10.258792    4789 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0819 10:27:10.258800    4789 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0819 10:27:10.259681    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.259588    4799 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000 ...
	I0819 10:27:10.634434    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.634476    4789 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:27:10.634529    4789 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:27:10.744945    4789 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:27:10.744966    4789 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:10.744993    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745030    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745065    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:10.745094    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:10.745118    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:10.748020    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Pid is 4802
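
The lines above show the driver shelling out to hyperkit itself: a pid file via -F, CPU count and memory via -c/-m, PCI slots via repeated -s flags, and the VM UUID via -U. A minimal Go sketch of assembling that argument vector, with paths and values taken from the log; this is an illustration, not the driver's actual builder:

package main

import (
	"fmt"
	"os/exec"
)

// hyperkitArgs assembles the core argument vector shown in the log above.
func hyperkitArgs(stateDir, uuid string, cpus, memMB int) []string {
	return []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid",
		"-c", fmt.Sprint(cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/ha-431000.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
	}
}

func main() {
	args := hyperkitArgs("/tmp/state", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", 2, 2200)
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	fmt.Println(cmd.String()) // prints the command line without running it
}
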
	I0819 10:27:10.748404    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:27:10.748413    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.748494    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:10.749357    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:10.749398    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:10.749412    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:10.749423    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:10.749431    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:10.755634    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:10.806699    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:10.807300    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:10.807314    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:10.807322    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:10.807335    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.184562    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:11.184575    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:11.299194    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:11.299213    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:11.299228    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:11.299236    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.300075    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:11.300086    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:12.750038    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0819 10:27:12.750054    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:12.750189    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:12.750969    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:12.751019    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:12.751030    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:12.751039    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:12.751052    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:14.752158    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0819 10:27:14.752174    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:14.752264    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:14.753040    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:14.753090    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:14.753102    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:14.753111    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:14.753117    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.754325    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0819 10:27:16.754340    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:16.754402    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:16.755326    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:16.755347    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:16.755354    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:16.755373    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:16.755390    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.856153    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:27:16.856252    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:27:16.856262    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:27:16.880804    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:27:18.757489    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0819 10:27:18.757504    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:18.757601    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:18.758394    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:18.758435    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:18.758449    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:18.758481    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:18.758495    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:20.758927    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0819 10:27:20.758946    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.759035    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.759848    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:20.759873    4789 main.go:141] libmachine: (ha-431000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:20.759888    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:20.759901    4789 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:27:20.759913    4789 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
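
The Attempt 0..5 loop above polls the host's DHCP lease table until the VM's freshly generated MAC address shows up, which yields its IP. A sketch of that matching step, written against the simplified {Name:... IPAddress:... HWAddress:...} entry form printed in the log (the raw /var/db/dhcpd_leases on-disk syntax differs):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// findLeaseIP scans lease entries in the simplified form logged above,
// e.g. "{Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ...}",
// and returns the IP bound to the given MAC address, if any.
func findLeaseIP(leases, mac string) (string, bool) {
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+)`)
	for _, line := range strings.Split(leases, "\n") {
		if m := re.FindStringSubmatch(line); m != nil && m[2] == mac {
			return m[1], true
		}
	}
	return "", false
}

func main() {
	leases := "{Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9}"
	if ip, ok := findLeaseIP(leases, "b2:ad:7c:2f:19:d9"); ok {
		fmt.Println("Found match:", ip) // prints 192.169.0.5
	}
}
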
	I0819 10:27:20.759952    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:20.760523    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760634    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760741    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:27:20.760753    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:20.760839    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.760885    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.761678    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:27:20.761690    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:27:20.761696    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:27:20.761702    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:20.761795    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:20.761883    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.761969    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.762060    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:20.762168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:20.762361    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:20.762369    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:27:21.818394    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
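
The WaitForSSH step above boils down to running a no-op command (exit 0) over SSH until it succeeds. A rough standalone equivalent, assuming a system ssh binary; the retry count and delay here are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op remote command until the SSH connection succeeds.
func waitForSSH(user, host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is available
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %d attempts", host, attempts)
}

func main() {
	err := waitForSSH("docker", "192.169.0.5",
		"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa", 30)
	fmt.Println(err)
}
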
	I0819 10:27:21.818406    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:27:21.818419    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.818554    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.818654    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818747    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818841    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.818981    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.819131    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.819139    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:27:21.870784    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:27:21.870826    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:27:21.870831    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:27:21.870837    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.870976    4789 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:27:21.870986    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.871077    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.871169    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.871272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871352    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871452    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.871577    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.871711    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.871719    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:27:21.937676    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:27:21.937694    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.937826    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.937927    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938017    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938112    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.938245    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.938391    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.938402    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:27:21.996654    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
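
The shell script above makes the hostname mapping idempotent: if some line already maps the host, do nothing; otherwise rewrite an existing 127.0.1.1 entry or append a new one. The same ensure-line pattern as a Go sketch (the provisioner itself runs the shell over SSH rather than code like this):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname returns hosts-file content with exactly one
// "127.0.1.1 <name>" line: it rewrites an existing 127.0.1.1
// entry if present, and appends one otherwise.
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	entry := "127.0.1.1 " + name
	for i, l := range lines {
		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
			return hosts // host already mapped, mirror the grep -xq short-circuit
		}
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry // mirror the sed rewrite branch
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n" + entry + "\n" // mirror the tee -a append branch
}

func main() {
	fmt.Println(ensureHostname("127.0.0.1 localhost", "ha-431000"))
}
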
	I0819 10:27:21.996676    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:27:21.996692    4789 buildroot.go:174] setting up certificates
	I0819 10:27:21.996701    4789 provision.go:84] configureAuth start
	I0819 10:27:21.996714    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.996873    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:21.996990    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.997094    4789 provision.go:143] copyHostCerts
	I0819 10:27:21.997133    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997201    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:27:21.997209    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997337    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:27:21.997534    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997567    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:27:21.997572    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997714    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:27:21.997882    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.997926    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:27:21.997941    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.998049    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:27:21.998203    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:27:22.044837    4789 provision.go:177] copyRemoteCerts
	I0819 10:27:22.044896    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:27:22.044908    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.045021    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.045107    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.045191    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.045288    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:22.078701    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:27:22.078779    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:27:22.098027    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:27:22.098092    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:27:22.117169    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:27:22.117235    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:27:22.137411    4789 provision.go:87] duration metric: took 140.68689ms to configureAuth
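
configureAuth issues a server certificate whose SANs match the san=[...] list logged above, then copies ca.pem/server.pem/server-key.pem into /etc/docker. A self-contained sketch of minting a SAN-bearing server cert with crypto/x509 (self-signed here for brevity; the real flow signs with the minikube CA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: the IPs and DNS names the server may be reached by.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		DNSNames:    []string{"ha-431000", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
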
	I0819 10:27:22.137424    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:27:22.137558    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:22.137574    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:22.137700    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.137783    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.137859    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.137942    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.138028    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.138134    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.138266    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.138274    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:27:22.191384    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:27:22.191397    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:27:22.191469    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:27:22.191481    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.191636    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.191724    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191834    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191924    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.192051    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.192193    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.192236    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:27:22.256138    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:27:22.256165    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.256301    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.256391    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256475    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256578    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.256695    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.256839    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.256851    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:27:23.816844    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
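
The diff || { mv; systemctl } one-liner above installs the freshly rendered unit only when it differs from what is on disk; here /lib/systemd/system/docker.service did not exist yet, so diff failed and the new unit was moved into place and enabled. The write-if-changed idea as a local Go sketch (the real flow runs remotely over SSH with sudo):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes content to path only when it differs from the
// current file (or the file is missing), and reports whether it did.
func installIfChanged(path string, content []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // unit unchanged: caller can skip daemon-reload/restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, content, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // swap the .new file into place
}

func main() {
	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
	fmt.Println(changed, err) // when true, the caller would daemon-reload and restart
}
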
	
	I0819 10:27:23.816860    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:27:23.816871    4789 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0819 10:27:23.817008    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:27:23.817016    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:27:23.817020    4789 client.go:171] duration metric: took 13.841219093s to LocalClient.Create
	I0819 10:27:23.817036    4789 start.go:167] duration metric: took 13.84126124s to libmachine.API.Create "ha-431000"
	I0819 10:27:23.817044    4789 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:27:23.817051    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:27:23.817063    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.817219    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:27:23.817232    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.817321    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.817402    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.817497    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.817595    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.852993    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:27:23.857771    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:27:23.857792    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:27:23.857909    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:27:23.858094    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:27:23.858100    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:27:23.858323    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:27:23.868639    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:23.894485    4789 start.go:296] duration metric: took 77.430316ms for postStartSetup
	I0819 10:27:23.894509    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:23.895099    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.895256    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:23.895585    4789 start.go:128] duration metric: took 13.953185373s to createHost
	I0819 10:27:23.895598    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.895691    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.895790    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895879    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895966    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.896069    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:23.896228    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:23.896236    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:27:23.956133    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088443.744394113
	
	I0819 10:27:23.956145    4789 fix.go:216] guest clock: 1724088443.744394113
	I0819 10:27:23.956151    4789 fix.go:229] Guest: 2024-08-19 10:27:23.744394113 -0700 PDT Remote: 2024-08-19 10:27:23.895593 -0700 PDT m=+14.491162031 (delta=-151.198887ms)
	I0819 10:27:23.956169    4789 fix.go:200] guest clock delta is within tolerance: -151.198887ms
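
The guest clock check reads date +%s.%N over SSH, subtracts the host clock, and only adjusts the guest when the delta exceeds a tolerance. A sketch of that comparison using the values from the log; the one-second threshold is an assumption for illustration:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns guest-minus-host skew.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Guest and host timestamps taken from the log lines above.
	d, _ := clockDelta("1724088443.744394113", time.Unix(1724088443, 895593000))
	const tolerance = time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v within tolerance=%v\n", d, d > -tolerance && d < tolerance)
}
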
	I0819 10:27:23.956173    4789 start.go:83] releasing machines lock for "ha-431000", held for 14.013893151s
	I0819 10:27:23.956192    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956322    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.956416    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956749    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956860    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956951    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:27:23.956980    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957023    4789 ssh_runner.go:195] Run: cat /version.json
	I0819 10:27:23.957036    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957073    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957109    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957170    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957184    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957292    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957350    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.957384    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:24.032926    4789 ssh_runner.go:195] Run: systemctl --version
	I0819 10:27:24.037723    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:27:24.041939    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:27:24.041985    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:27:24.055424    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:27:24.055435    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.055529    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.070257    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:27:24.079169    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:27:24.088264    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.088319    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:27:24.097172    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.105902    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:27:24.114585    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.123406    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:27:24.132626    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:27:24.141378    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:27:24.150490    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:27:24.158980    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:27:24.167068    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:27:24.175030    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.269460    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
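
Each sed invocation in the run above rewrites one setting of /etc/containerd/config.toml in place; forcing SystemdCgroup = false is what selects the cgroupfs driver. The same single substitution as a Go sketch:

package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs mirrors the sed rule above: rewrite any
// "SystemdCgroup = ..." line to "SystemdCgroup = false",
// preserving its indentation.
func forceCgroupfs(configTOML string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	fmt.Println(forceCgroupfs("  SystemdCgroup = true\n"))
}
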
	I0819 10:27:24.289328    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.289405    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:27:24.304907    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.317291    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:27:24.330289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.340851    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.351456    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:27:24.376914    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.387402    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.402522    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:27:24.405426    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:27:24.412799    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:27:24.426019    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:27:24.528550    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:27:24.636829    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.636893    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:27:24.652027    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.753641    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:27.037286    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.283575266s)
	I0819 10:27:27.037346    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:27:27.047775    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:27:27.062961    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.074027    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:27:27.172330    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:27:27.284593    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.395779    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:27:27.409552    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.420868    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.532356    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:27:27.591558    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:27:27.591636    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:27:27.595967    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:27:27.596013    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:27:27.599275    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:27:27.625101    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:27:27.625173    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.642636    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.693299    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:27:27.693355    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:27.693783    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:27:27.698129    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:27.708916    4789 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:27:27.708982    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:27.709038    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:27.721971    4789 docker.go:685] Got preloaded images: 
	I0819 10:27:27.721984    4789 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0819 10:27:27.722034    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:27.730353    4789 ssh_runner.go:195] Run: which lz4
	I0819 10:27:27.733218    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 10:27:27.733323    4789 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:27:27.736425    4789 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:27:27.736445    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0819 10:27:28.750864    4789 docker.go:649] duration metric: took 1.017557348s to copy over tarball
	I0819 10:27:28.750956    4789 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:27:31.074672    4789 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323648699s)
	I0819 10:27:31.074688    4789 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:27:31.100633    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:31.109680    4789 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0819 10:27:31.123335    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:31.234501    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:33.578614    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344043512s)
	I0819 10:27:33.578701    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:33.592021    4789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 10:27:33.592040    4789 cache_images.go:84] Images are preloaded, skipping loading
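
Whether the preload took effect is decided by listing docker images --format {{.Repository}}:{{.Tag}} and looking for a sentinel image at the target version: the first listing above came back empty, so the lz4 tarball was copied in and unpacked under /var, and the second listing shows the full image set. A sketch of that presence check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePreloaded reports whether the sentinel image appears in
// `docker images --format {{.Repository}}:{{.Tag}}` output.
func imagePreloaded(sentinel string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if img == sentinel {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePreloaded("registry.k8s.io/kube-apiserver:v1.31.0")
	fmt.Println(ok, err)
}
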
	I0819 10:27:33.592048    4789 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:27:33.592132    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
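	The kubelet stanza above lands as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below), so the flags the node actually runs with are the merge of the base unit and the drop-in. A quick way to read back the effective unit from inside the guest; a sketch:
	
	    # show the base kubelet unit plus all drop-ins as systemd merges them
	    minikube ssh -p ha-431000 -- systemctl cat kubelet
	    # or just the final ExecStart after the drop-in's override
	    minikube ssh -p ha-431000 -- systemctl show kubelet -p ExecStart --no-pager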
	I0819 10:27:33.592198    4789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:27:33.629283    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:33.629295    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:33.629309    4789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:27:33.629329    4789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:27:33.629424    4789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
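	The generated file above bundles four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is written to /var/tmp/minikube/kubeadm.yaml before init runs. If you hand-edit such a file, recent kubeadm releases (v1.26+) can check it offline; a sketch using the binaries minikube stages in the guest:
	
	    # validate the generated kubeadm config without touching the cluster
	    minikube ssh -p ha-431000 -- sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml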
	I0819 10:27:33.629439    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:27:33.629491    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:27:33.642904    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:27:33.642969    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
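	kube-vip runs as a static pod with NET_ADMIN/NET_RAW, takes a lease-based leader election in kube-system (plndr-cp-lock), and announces the HA VIP 192.169.0.254 as a /32 on eth0 via ARP, with control-plane load balancing on port 8443. Once the control plane is up, two quick checks that the VIP is live; a sketch (the /livez probe assumes the default public-info-viewer RBAC, which permits unauthenticated health checks):
	
	    # the current kube-vip leader should hold the /32 VIP on eth0
	    minikube ssh -p ha-431000 -- ip addr show eth0
	    # and the apiserver should answer on the VIP (self-signed cert, hence -k)
	    curl -k https://192.169.0.254:8443/livez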
	I0819 10:27:33.643018    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:27:33.652008    4789 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:27:33.652070    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:27:33.660066    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:27:33.673571    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:27:33.686700    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:27:33.700085    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0819 10:27:33.713804    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:27:33.716661    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
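	The one-liner above is the usual idempotent hosts-file rewrite: drop any stale entry for the name, append the current mapping, and copy the temp file back over /etc/hosts in a single command. A generic form of the same idiom (NAME and IP are placeholders):
	
	    # rewrite /etc/hosts so NAME resolves to IP, replacing any previous entry for NAME
	    { grep -v $'\tNAME$' /etc/hosts; printf 'IP\tNAME\n'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts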
	I0819 10:27:33.726684    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:33.822205    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:27:33.836833    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:27:33.836844    4789 certs.go:194] generating shared ca certs ...
	I0819 10:27:33.836855    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.837051    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:27:33.837132    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:27:33.837142    4789 certs.go:256] generating profile certs ...
	I0819 10:27:33.837189    4789 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:27:33.837203    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0819 10:27:33.888319    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt ...
	I0819 10:27:33.888333    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt: {Name:mk2ecc34873277fbe11bf267ec0d97684e18e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888666    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key ...
	I0819 10:27:33.888675    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key: {Name:mk51abee214c838f4621902241303fe73ba93aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888900    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e
	I0819 10:27:33.888915    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0819 10:27:34.060027    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e ...
	I0819 10:27:34.060046    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e: {Name:mk108eb9cf88ab2aae15883e4a3724751adb3118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060347    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e ...
	I0819 10:27:34.060356    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e: {Name:mk8fae11cce9c9a45d3e151953d1ee9ab2cc82d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060557    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:27:34.060759    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:27:34.060929    4789 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:27:34.060943    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0819 10:27:34.243675    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0819 10:27:34.243690    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mkeb1eac7ee8b3901067565b7ff883710f2d1088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244061    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key ...
	I0819 10:27:34.244069    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key: {Name:mkc1afcd7a6a9a572716155e33c32e7def81650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244312    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:27:34.244340    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:27:34.244378    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:27:34.244398    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:27:34.244416    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:27:34.244448    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:27:34.244486    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:27:34.244521    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:27:34.244615    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:27:34.244666    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:27:34.244675    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:27:34.244748    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:27:34.244776    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:27:34.244831    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:27:34.244909    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:34.244942    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.244990    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.245007    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.245522    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:27:34.267677    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:27:34.287348    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:27:34.309971    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:27:34.330910    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 10:27:34.350036    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:27:34.370663    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:27:34.390457    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:27:34.410226    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:27:34.431025    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:27:34.451232    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
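	By this point the shared CA material and the profile certs are staged under /var/lib/minikube/certs and /usr/share/ca-certificates in the guest. The apiserver cert generated above was signed for the service IP, loopback, the node IP, and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.169.0.5, 192.169.0.254); that SAN list can be read back directly; a sketch:
	
	    # print the Subject Alternative Name extension of the staged apiserver cert
	    minikube ssh -p ha-431000 -- sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	        | grep -A1 'Subject Alternative Name'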
	I0819 10:27:34.471133    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:27:34.487758    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:27:34.493769    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:27:34.506308    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511941    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511996    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.519851    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:27:34.531120    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:27:34.540803    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544302    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544341    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.548724    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:27:34.558817    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:27:34.568088    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571692    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571731    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.575999    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
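	The hash-then-symlink pattern above is OpenSSL's CA directory convention: a cert in /etc/ssl/certs is only found during verification if it is reachable as <subject-hash>.0, where the hash is what `openssl x509 -hash` prints (51391683, 3ec20f2e and b5213941 here). The generic recipe for any cert; a sketch (cert.pem is a placeholder):
	
	    # compute the subject hash and create the lookup symlink OpenSSL expects
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
	    sudo ln -fs /usr/share/ca-certificates/cert.pem "/etc/ssl/certs/${h}.0"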
	I0819 10:27:34.585057    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:27:34.588207    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:27:34.588251    4789 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:34.588345    4789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:27:34.601241    4789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:27:34.609838    4789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:27:34.618794    4789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:27:34.627200    4789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:27:34.627208    4789 kubeadm.go:157] found existing configuration files:
	
	I0819 10:27:34.627243    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:27:34.635162    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:27:34.635198    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:27:34.643336    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:27:34.651247    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:27:34.651280    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:27:34.659346    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.667240    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:27:34.667281    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.675386    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:27:34.684053    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:27:34.684105    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 10:27:34.692357    4789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:27:34.751991    4789 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:27:34.752160    4789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:27:34.833970    4789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:27:34.834062    4789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:27:34.834153    4789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:27:34.842513    4789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:27:34.863067    4789 out.go:235]   - Generating certificates and keys ...
	I0819 10:27:34.863126    4789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:27:34.863179    4789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:27:35.003012    4789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:27:35.766829    4789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:27:35.976153    4789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:27:36.134850    4789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:27:36.228947    4789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:27:36.229166    4789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.375842    4789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:27:36.375934    4789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.597289    4789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:27:36.907219    4789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:27:37.426404    4789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:27:37.426585    4789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:27:37.566387    4789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:27:38.000620    4789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:27:38.121335    4789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:27:38.179042    4789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:27:38.231270    4789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:27:38.231752    4789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:27:38.233818    4789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:27:38.255454    4789 out.go:235]   - Booting up control plane ...
	I0819 10:27:38.255535    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:27:38.255605    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:27:38.255655    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:27:38.255734    4789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:27:38.255809    4789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:27:38.255842    4789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:27:38.364951    4789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:27:38.365069    4789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:27:39.366309    4789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001984632s
	I0819 10:27:39.366388    4789 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:27:45.029099    4789 kubeadm.go:310] [api-check] The API server is healthy after 5.666724975s
	I0819 10:27:45.039440    4789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:27:45.046481    4789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:27:45.059797    4789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:27:45.059959    4789 kubeadm.go:310] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:27:45.067482    4789 kubeadm.go:310] [bootstrap-token] Using token: rrr6yu.ivgebthw63l7ehzv
	I0819 10:27:45.106820    4789 out.go:235]   - Configuring RBAC rules ...
	I0819 10:27:45.107004    4789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:27:45.110638    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:27:45.151902    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:27:45.154406    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:27:45.156223    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:27:45.158190    4789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:27:45.434935    4789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:27:45.846068    4789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:27:46.434136    4789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:27:46.434675    4789 kubeadm.go:310] 
	I0819 10:27:46.434724    4789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:27:46.434728    4789 kubeadm.go:310] 
	I0819 10:27:46.434798    4789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:27:46.434808    4789 kubeadm.go:310] 
	I0819 10:27:46.434829    4789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:27:46.434881    4789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:27:46.434925    4789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:27:46.434930    4789 kubeadm.go:310] 
	I0819 10:27:46.434974    4789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:27:46.434984    4789 kubeadm.go:310] 
	I0819 10:27:46.435035    4789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:27:46.435041    4789 kubeadm.go:310] 
	I0819 10:27:46.435080    4789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:27:46.435139    4789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:27:46.435197    4789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:27:46.435204    4789 kubeadm.go:310] 
	I0819 10:27:46.435268    4789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:27:46.435333    4789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:27:46.435337    4789 kubeadm.go:310] 
	I0819 10:27:46.435410    4789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435498    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd \
	I0819 10:27:46.435515    4789 kubeadm.go:310] 	--control-plane 
	I0819 10:27:46.435520    4789 kubeadm.go:310] 
	I0819 10:27:46.435589    4789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:27:46.435594    4789 kubeadm.go:310] 
	I0819 10:27:46.435664    4789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435746    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd 
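	The bootstrap token in the join commands above (rrr6yu.ivgebthw63l7ehzv) carries the 24h ttl set in the InitConfiguration, so it will be useless for late joiners. A fresh join one-liner can be minted at any time on the existing control plane; a sketch using the staged kubeadm binary:
	
	    # prints a ready-to-run "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
	    minikube ssh -p ha-431000 -- sudo /var/lib/minikube/binaries/v1.31.0/kubeadm token create --print-join-command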
	I0819 10:27:46.435997    4789 kubeadm.go:310] W0819 17:27:34.545490    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436229    4789 kubeadm.go:310] W0819 17:27:34.546600    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436316    4789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 10:27:46.436331    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:46.436337    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:46.458203    4789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:27:46.517773    4789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:27:46.523858    4789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:27:46.523872    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:27:46.539513    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
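	With the kindnet manifest applied, the CNI plugin should roll out as a DaemonSet in kube-system, and the node should flip to Ready once pod networking is functional. A quick check from the host; a sketch (the app=kindnet label is an assumption about the stock manifest):
	
	    # watch the kindnet pods come up; the node goes Ready once CNI is functional
	    kubectl --context ha-431000 -n kube-system get pods -l app=kindnet -o wide
	    kubectl --context ha-431000 get nodes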
	I0819 10:27:46.759807    4789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:27:46.759878    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:46.759883    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0819 10:27:46.777623    4789 ops.go:34] apiserver oom_adj: -16
	I0819 10:27:46.926523    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.427175    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.927281    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.428033    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.926686    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.426608    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.926666    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:50.010199    4789 kubeadm.go:1113] duration metric: took 3.25030545s to wait for elevateKubeSystemPrivileges
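	The burst of `kubectl get sa default` calls above is minikube polling roughly every 500ms until the default ServiceAccount exists, i.e. until the controller-manager's serviceaccount controller is live; the minikube-rbac ClusterRoleBinding created just before depends on that account. The same wait collapses to a small loop; a sketch:
	
	    # block until the default ServiceAccount appears in the default namespace
	    until kubectl --context ha-431000 -n default get sa default >/dev/null 2>&1; do
	        sleep 0.5
	    done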
	I0819 10:27:50.010216    4789 kubeadm.go:394] duration metric: took 15.42163041s to StartCluster
	I0819 10:27:50.010227    4789 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.010325    4789 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.010762    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.011021    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:27:50.011033    4789 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.011050    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:27:50.011076    4789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:27:50.011116    4789 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0819 10:27:50.011120    4789 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0819 10:27:50.011148    4789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0819 10:27:50.011152    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.011155    4789 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0819 10:27:50.011186    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.011415    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011420    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011430    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.011431    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.020667    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
	I0819 10:27:50.021171    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021230    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
	I0819 10:27:50.021523    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021533    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.021634    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021753    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.021940    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.022115    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.022146    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.022229    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.022806    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.022988    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.023051    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.024924    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.025156    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:27:50.025529    4789 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:27:50.025699    4789 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0819 10:27:50.025720    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.025937    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.025963    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.031229    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51138
	I0819 10:27:50.031604    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.031942    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.031953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.032154    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.032270    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.032358    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.032435    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.033436    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.034958    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51140
	I0819 10:27:50.035269    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.035586    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.035596    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.035796    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.036148    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.036165    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.044937    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51142
	I0819 10:27:50.045312    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.045667    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.045680    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.045893    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.045996    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.046077    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.046151    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.047102    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.047225    4789 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.047234    4789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:27:50.047243    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.047325    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.047417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.047495    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.047571    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.056055    4789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:27:50.076134    4789 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.076146    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:27:50.076163    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.076310    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.076417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.076556    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.076664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.113554    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 10:27:50.127003    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.262022    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.488277    4789 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
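	The sed pipeline above splices a `hosts` block into the CoreDNS Corefile so pods can resolve host.minikube.internal to the hypervisor gateway 192.169.0.1. Whether the edit stuck can be read back from the ConfigMap; a sketch:
	
	    # the Corefile should now contain a hosts { 192.169.0.1 host.minikube.internal ... } block
	    kubectl --context ha-431000 -n kube-system get configmap coredns -o yaml | grep -B1 -A3 'host.minikube.internal'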
	I0819 10:27:50.488318    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488327    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488534    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488547    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488556    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488563    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488564    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488681    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488704    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488718    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488767    4789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:27:50.488780    4789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:27:50.488862    4789 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 10:27:50.488867    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.488877    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.488882    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495057    4789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:27:50.495477    4789 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 10:27:50.495484    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.495490    4789 round_trippers.go:473]     Content-Type: application/json
	I0819 10:27:50.495494    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.498504    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:27:50.498632    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.498641    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.498797    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.498806    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.498814    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649595    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649607    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.649833    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.649843    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649848    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.649874    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649893    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.650019    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.650028    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.650044    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.673040    4789 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 10:27:50.709732    4789 addons.go:510] duration metric: took 698.654107ms for enable addons: enabled=[default-storageclass storage-provisioner]
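	Addon enablement here amounts to applying the two manifests staged under /etc/kubernetes/addons and recording the flags in the profile config. The per-profile addon state can be confirmed from the host with the same binary the test drives; a sketch:
	
	    out/minikube-darwin-amd64 addons list -p ha-431000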
	I0819 10:27:50.709774    4789 start.go:246] waiting for cluster config update ...
	I0819 10:27:50.709799    4789 start.go:255] writing updated cluster config ...
	I0819 10:27:50.746763    4789 out.go:201] 
	I0819 10:27:50.768467    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.768565    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.790908    4789 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:27:50.832651    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:50.832673    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:50.832790    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:50.832801    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:50.832852    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.833261    4789 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:50.833314    4789 start.go:364] duration metric: took 41.162µs to acquireMachinesLock for "ha-431000-m02"
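acquireMachinesLock above serializes machine creation with a named lock, a 500ms retry delay, and a 13m timeout. A minimal file-based sketch of that acquire/retry/timeout pattern (minikube uses a proper mutex library internally; the lock path here is hypothetical):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire polls for an exclusive lock file until timeout, retrying every delay.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            // O_CREATE|O_EXCL makes creation atomic: only one process wins.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/ha-431000-m02.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; safe to create the machine")
    }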
	I0819 10:27:50.833329    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.833382    4789 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0819 10:27:50.854688    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:50.854833    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.854870    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.864309    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
	I0819 10:27:50.864640    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.864951    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.864963    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.865175    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.865294    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:27:50.865374    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:27:50.865472    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:50.865485    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:50.865515    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:50.865553    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865565    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865607    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:50.865634    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865649    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865666    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:50.865676    4789 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0819 10:27:50.865754    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.865776    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:27:50.891966    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:50.891987    4789 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0819 10:27:50.892145    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.892330    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:50.892137    4845 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:50.892421    4789 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:51.078705    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.078630    4845 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa...
	I0819 10:27:51.171843    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.171751    4845 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0819 10:27:51.171860    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0819 10:27:51.171868    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0819 10:27:51.172685    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.172591    4845 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02 ...
	I0819 10:27:51.544884    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.544910    4789 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:27:51.544922    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:27:51.571631    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:27:51.571653    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:51.571680    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571706    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571739    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:51.571767    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:51.571780    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:51.574668    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Pid is 4850
	I0819 10:27:51.575734    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:27:51.575757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.575783    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:51.576702    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:51.576759    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:51.576778    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:51.576816    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:51.576830    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:51.576844    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:51.582262    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:51.590515    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:51.591362    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:51.591388    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:51.591397    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:51.591407    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:51.978930    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:51.978947    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:52.094059    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:52.094091    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:52.094127    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:52.094142    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:52.094869    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:52.094879    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:53.577521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0819 10:27:53.577541    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:53.577636    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:53.578446    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:53.578461    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:53.578472    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:53.578481    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:53.578489    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:53.578507    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:55.579485    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0819 10:27:55.579501    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:55.579576    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:55.580358    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:55.580387    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:55.580414    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:55.580426    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:55.580434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:55.580442    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.581588    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0819 10:27:57.581603    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:57.581681    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:57.582486    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:57.582510    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:57.582521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:57.582530    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:57.582540    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:57.582548    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.680321    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:27:57.680434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:27:57.680445    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:27:57.704982    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:27:59.583757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0819 10:27:59.583772    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:59.583842    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:59.584652    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:59.584696    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:59.584710    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:59.584720    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:59.584729    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:59.584737    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:28:01.585137    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0819 10:28:01.585154    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.585235    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.585996    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:28:01.586042    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:28:01.586055    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:28:01.586080    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:28:01.586086    4789 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
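The retry loop above resolves the new VM's IP by matching its generated MAC (5a:74:68:47:b9:72) against /var/db/dhcpd_leases until a lease appears. A sketch of that lookup, assuming the macOS vmnet lease format in which ip_address precedes hw_address within each lease block (field names come from the file, not from minikube's parser):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans the dhcpd lease file for a hw_address matching mac and
    // returns the ip_address seen earlier in the same lease block.
    func ipForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // The field looks like "1,5a:74:68:47:b9:72"; the "1," is a type prefix.
                if strings.HasSuffix(line, ","+mac) {
                    return ip, nil
                }
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "5a:74:68:47:b9:72")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip) // 192.169.0.6 once the lease shows up, as in attempt 5 above
    }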
	I0819 10:28:01.586098    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:01.586694    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586794    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586889    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:28:01.586896    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:28:01.586980    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.587029    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.587790    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:28:01.587796    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:28:01.587800    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:28:01.587804    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:01.587881    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:01.587956    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588060    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588138    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:01.588256    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:01.588435    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:01.588443    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:28:02.645180    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
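WaitForSSH above probes the guest by running `exit 0` over SSH until the command succeeds. A sketch of the same probe using golang.org/x/crypto/ssh; the retry count, interval, user, and key path are placeholders, not minikube's settings:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH retries the probe command until it succeeds or attempts run out.
    func waitForSSH(addr, user, keyPath string) error {
        pemBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a freshly created local VM
            Timeout:         5 * time.Second,
        }
        for i := 0; i < 30; i++ {
            if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
                sess, err := client.NewSession()
                if err == nil {
                    runErr := sess.Run("exit 0") // the same probe command seen in the log
                    sess.Close()
                    client.Close()
                    if runErr == nil {
                        return nil
                    }
                } else {
                    client.Close()
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh never became available at %s", addr)
    }

    func main() {
        if err := waitForSSH("192.169.0.6:22", "docker", "/path/to/id_rsa"); err != nil {
            panic(err)
        }
        fmt.Println("SSH is up")
    }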
	I0819 10:28:02.645193    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:28:02.645198    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.645326    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.645422    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645501    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645583    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.645718    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.645869    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.645877    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:28:02.700961    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:28:02.700992    4789 main.go:141] libmachine: found compatible host: buildroot
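Provisioner detection works by reading /etc/os-release over SSH and matching the ID field against known provisioners. A small sketch of that parse (the parsing details are an assumption; the log only shows the output and the resulting ID=buildroot match):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseOSRelease turns `cat /etc/os-release` output into a key/value map,
    // stripping optional quotes, so the ID field can drive provisioner choice.
    func parseOSRelease(out string) map[string]string {
        kv := map[string]string{}
        for _, line := range strings.Split(out, "\n") {
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            kv[k] = strings.Trim(v, `"`)
        }
        return kv
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        if parseOSRelease(out)["ID"] == "buildroot" {
            fmt.Println("found compatible host: buildroot") // matches the log line above
        }
    }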
	I0819 10:28:02.700998    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:28:02.701003    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701132    4789 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:28:02.701143    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701237    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.701327    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.701424    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701502    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701588    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.701720    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.701855    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.701864    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:28:02.773500    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:28:02.773515    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.773649    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.773737    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773840    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773945    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.774071    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.774226    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.774237    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:28:02.838956    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:28:02.838971    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:28:02.838984    4789 buildroot.go:174] setting up certificates
	I0819 10:28:02.838992    4789 provision.go:84] configureAuth start
	I0819 10:28:02.838998    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.839135    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:02.839223    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.839322    4789 provision.go:143] copyHostCerts
	I0819 10:28:02.839347    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839393    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:28:02.839399    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839532    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:28:02.839738    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839769    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:28:02.839774    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839845    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:28:02.839992    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840021    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:28:02.840025    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840090    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:28:02.840244    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
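The server cert above is generated with a SAN list covering loopback, the node IP, and the cluster host names. A self-signed crypto/x509 sketch that populates the same SANs (minikube signs with its ca-key.pem instead of self-signing; self-signing keeps this example short):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the log: IPs plus host names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
            DNSNames:    []string{"ha-431000-m02", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }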
	I0819 10:28:02.878856    4789 provision.go:177] copyRemoteCerts
	I0819 10:28:02.878899    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:28:02.878912    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.879041    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.879132    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.879231    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.879330    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:02.914748    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:28:02.914819    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:28:02.934608    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:28:02.934673    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:28:02.954833    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:28:02.954900    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:28:02.974652    4789 provision.go:87] duration metric: took 135.649275ms to configureAuth
	I0819 10:28:02.974666    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:28:02.974809    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:02.974823    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:02.974958    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.975063    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.975147    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975219    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975328    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.975454    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.975601    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.975609    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:28:03.033628    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:28:03.033639    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:28:03.033715    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:28:03.033730    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.033861    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.033950    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034053    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034140    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.034264    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.034412    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.034459    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:28:03.102644    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:28:03.102663    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.102811    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.102898    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.102999    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.103120    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.103244    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.103390    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.103404    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:28:04.637367    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:28:04.637381    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:28:04.637388    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0819 10:28:04.637524    4789 main.go:141] libmachine: Docker is up and running!
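The unit install above diffs docker.service.new against the current unit and only moves it into place, reloads systemd, and restarts Docker when the two differ, which keeps re-provisioning idempotent. The same compare-then-replace pattern in Go (the paths and service name are copied from the log; the exec wiring is illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installUnit replaces the unit file and restarts the service only when the
    // rendered content actually differs from what is on disk.
    func installUnit(path string, rendered []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // unchanged: no daemon-reload, no restart
        }
        if err := os.WriteFile(path, rendered, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for brevity
        if err := installUnit("/lib/systemd/system/docker.service", unit); err != nil {
            panic(err)
        }
    }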
	I0819 10:28:04.637530    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:28:04.637534    4789 client.go:171] duration metric: took 13.771742286s to LocalClient.Create
	I0819 10:28:04.637544    4789 start.go:167] duration metric: took 13.771771513s to libmachine.API.Create "ha-431000"
	I0819 10:28:04.637550    4789 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:28:04.637557    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:28:04.637566    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.637712    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:28:04.637723    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.637834    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.637926    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.638026    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.638127    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.678475    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:28:04.682965    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:28:04.682980    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:28:04.683079    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:28:04.683246    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:28:04.683253    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:28:04.683434    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:28:04.695086    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:04.723279    4789 start.go:296] duration metric: took 85.720185ms for postStartSetup
	I0819 10:28:04.723311    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:04.723943    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.724123    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:28:04.724446    4789 start.go:128] duration metric: took 13.890752069s to createHost
	I0819 10:28:04.724460    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.724558    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.724679    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724786    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724871    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.724979    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:04.725097    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:04.725103    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:28:04.784682    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088484.852271103
	
	I0819 10:28:04.784694    4789 fix.go:216] guest clock: 1724088484.852271103
	I0819 10:28:04.784698    4789 fix.go:229] Guest: 2024-08-19 10:28:04.852271103 -0700 PDT Remote: 2024-08-19 10:28:04.724454 -0700 PDT m=+55.319126445 (delta=127.817103ms)
	I0819 10:28:04.784725    4789 fix.go:200] guest clock delta is within tolerance: 127.817103ms
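The guest clock check runs `date +%s.%N` in the VM and compares the result against the host clock; here the delta comes out to 127.817103ms. A sketch of that comparison, reusing the timestamps from the log (the one-second tolerance is an assumed threshold, not minikube's exact value):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output (seconds, then nine nanosecond
    // digits) and returns how far the guest clock is ahead of the host clock.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        // Guest and host instants taken from the fix.go lines above.
        d, err := clockDelta("1724088484.852271103", time.Unix(1724088484, 724454000))
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second
        if math.Abs(float64(d)) < float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", d) // prints 127.817103ms
        }
    }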
	I0819 10:28:04.784731    4789 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.951104834s
	I0819 10:28:04.784750    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.784884    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.807240    4789 out.go:177] * Found network options:
	I0819 10:28:04.829600    4789 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:28:04.851548    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.851607    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852495    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852747    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852876    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:28:04.852915    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:28:04.852962    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.853080    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:28:04.853100    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.853127    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853372    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853394    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853596    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853633    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853742    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853804    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.853880    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:28:04.886788    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:28:04.886847    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:28:04.931189    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:28:04.931209    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:04.931315    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:04.947443    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:28:04.955693    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:28:04.964155    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:28:04.964197    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:28:04.972493    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.980548    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:28:04.988709    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.996856    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:28:05.005271    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:28:05.013575    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:28:05.021801    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:28:05.030285    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:28:05.037842    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:28:05.045332    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.140730    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:28:05.159555    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:05.159625    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:28:05.177222    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.189624    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:28:05.203743    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.214606    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.224836    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:28:05.249649    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.261132    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:05.276191    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:28:05.279129    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:28:05.287175    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:28:05.300748    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:28:05.396444    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:28:05.505778    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:28:05.505805    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:28:05.520914    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.616215    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:28:07.911303    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.295016426s)
	I0819 10:28:07.911366    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:28:07.923467    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:28:07.938312    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:07.949283    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:28:08.046922    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:28:08.152880    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.256594    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:28:08.271339    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:08.283089    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.384798    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:28:08.441813    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:28:08.441881    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:28:08.446421    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:28:08.446473    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:28:08.449807    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:28:08.479621    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
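Both 60s waits above (for the /var/run/cri-dockerd.sock socket path and for crictl) reduce to polling until a stat succeeds or a deadline passes. A sketch of that poll loop (the 500ms interval is an assumption; the log does not state minikube's interval):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until the path exists, mirroring the "Will wait 60s for
    // socket path" step in the log.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("cri-dockerd socket is ready")
    }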
	I0819 10:28:08.479690    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.496571    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.537488    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:28:08.579078    4789 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:28:08.603340    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:08.603786    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:28:08.608372    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
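That one-liner is an idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the current mapping, and sudo-copy the result back. Generalized into a reusable form (the function name and arguments are placeholders, not minikube code):

    update_hosts_entry() {   # usage: update_hosts_entry 192.169.0.1 host.minikube.internal
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }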
	I0819 10:28:08.618166    4789 mustload.go:65] Loading cluster: ha-431000
	I0819 10:28:08.618314    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:08.618533    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.618549    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.627122    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
	I0819 10:28:08.627459    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.627845    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.627857    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.628097    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.628239    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:28:08.628342    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:08.628430    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:28:08.629353    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:08.629592    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.629608    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.638041    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51172
	I0819 10:28:08.638388    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.638753    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.638770    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.638992    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.639108    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:08.639209    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:28:08.639216    4789 certs.go:194] generating shared ca certs ...
	I0819 10:28:08.639225    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.639357    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:28:08.639425    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:28:08.639434    4789 certs.go:256] generating profile certs ...
	I0819 10:28:08.639538    4789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:28:08.639562    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788
	I0819 10:28:08.639575    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0819 10:28:08.693749    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 ...
	I0819 10:28:08.693766    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788: {Name:mkade16cb35e521e9e55fc42d7cb129c8b94b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694149    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 ...
	I0819 10:28:08.694160    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788: {Name:mkeae0a28d48da45f84299952289f15db5f944f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694378    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:28:08.694703    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
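The regenerated apiserver cert has to carry every address a client might dial: the in-cluster service IP 10.96.0.1, localhost, both node IPs, and the 192.169.0.254 kube-vip VIP. One way to confirm the SANs landed on the copied cert (an inspection step, not something the log runs):

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt \
      | grep -A1 'Subject Alternative Name'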
	I0819 10:28:08.694954    4789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:28:08.694964    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:28:08.694987    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:28:08.695006    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:28:08.695024    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:28:08.695042    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:28:08.695060    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:28:08.695078    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:28:08.695096    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:28:08.695175    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:28:08.695213    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:28:08.695228    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:28:08.695261    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:28:08.695290    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:28:08.695321    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:28:08.695400    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:08.695438    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:28:08.695462    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:28:08.695482    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:08.695511    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:08.695664    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:08.695745    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:08.695845    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:08.695925    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:08.729193    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:28:08.736181    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:28:08.748665    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:28:08.751826    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:28:08.773481    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:28:08.777252    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:28:08.787546    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:28:08.791015    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:28:08.800105    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:28:08.803218    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:28:08.812240    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:28:08.815351    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:28:08.824083    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:28:08.844052    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:28:08.864107    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:28:08.884612    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:28:08.904284    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 10:28:08.924397    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:28:08.944026    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:28:08.964689    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:28:08.984934    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:28:09.004413    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:28:09.024043    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:28:09.043924    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:28:09.058066    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:28:09.071585    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:28:09.085080    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:28:09.098536    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:28:09.112048    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:28:09.125242    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:28:09.139717    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:28:09.144032    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:28:09.152602    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.155967    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.156009    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.160192    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:28:09.168568    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:28:09.176997    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180533    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180568    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.184799    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:28:09.193356    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:28:09.201811    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205453    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205494    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.209760    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
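The three test/ln blocks above build the hashed symlinks that OpenSSL's CApath lookup requires: each trusted PEM in /etc/ssl/certs needs a companion link named <subject-hash>.0. Spelled out for one cert, with the hash computed rather than hard-coded:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 for this CA, matching the line above
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"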
	I0819 10:28:09.218392    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:28:09.222392    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:28:09.222437    4789 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:28:09.222498    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
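The drop-in above pins m02's node name and IP onto the kubelet command line. Once the unit files land on the node (the scp lines further down), systemd can show the merged result; these checks are an addition, not part of the log:

    systemctl cat kubelet                              # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager     # the effective command line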
	I0819 10:28:09.222516    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:28:09.222559    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:28:09.234408    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
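kube-vip's control-plane load balancing sits on IPVS, hence the modprobe of ip_vs, its schedulers, and nf_conntrack just above. A hedged spot check that the modules actually loaded:

    lsmod | grep -E '^(ip_vs|nf_conntrack)'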
	I0819 10:28:09.234452    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
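Once kubelet launches this static pod, kube-vip should bind 192.169.0.254 on eth0 of whichever control plane currently holds the plndr-cp-lock lease and balance :8443 across members. Two assumed spot checks, run from a node and from a configured kubectl respectively:

    ip -4 addr show eth0 | grep 192.169.0.254     # present only on the current leader
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'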
	I0819 10:28:09.234506    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.242939    4789 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:28:09.242994    4789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0819 10:28:09.251336    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm
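The ?checksum=file:... suffix on each URL tells the downloader to fetch the sibling .sha256 and verify the binary before caching it. The manual equivalent for one of the three, using only the URLs shown above:

    v=v1.31.0
    curl -fsSLO "https://dl.k8s.io/release/${v}/bin/linux/amd64/kubelet"
    curl -fsSLO "https://dl.k8s.io/release/${v}/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check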
	I0819 10:28:11.797289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:28:11.809069    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.809192    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.812267    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:28:11.812291    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 10:28:12.469259    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.469340    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.472845    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:28:12.472869    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:28:13.348737    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.348820    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.352429    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:28:13.352449    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:28:13.542994    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:28:13.550937    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:28:13.564187    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:28:13.577654    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:28:13.591433    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:28:13.594347    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:13.604347    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:13.710422    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:13.730131    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:13.730407    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:13.730448    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:13.739474    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51199
	I0819 10:28:13.739816    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:13.740174    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:13.740190    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:13.740438    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:13.740564    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:13.740661    4789 start.go:317] joinCluster: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:28:13.740750    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 10:28:13.740767    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:13.740857    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:13.740939    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:13.741027    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:13.741101    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:13.815525    4789 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:13.815563    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0819 10:28:41.108330    4789 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (27.292143754s)
	I0819 10:28:41.108351    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 10:28:41.504714    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000-m02 minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=false
	I0819 10:28:41.585348    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-431000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 10:28:41.693283    4789 start.go:319] duration metric: took 27.951997328s to joinCluster
	I0819 10:28:41.693326    4789 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:41.693537    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:41.715528    4789 out.go:177] * Verifying Kubernetes components...
	I0819 10:28:41.790354    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:41.995139    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:42.017369    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:28:42.017608    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:28:42.017650    4789 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:28:42.017827    4789 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:42.017919    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.017925    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.017930    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.017935    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.025432    4789 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 10:28:42.518902    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.518917    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.518923    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.518927    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.521742    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:43.018396    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.018411    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.018417    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.018421    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.021454    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:43.518031    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.518083    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.518106    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.518116    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.522999    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:44.018193    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.018219    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.018231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.018237    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.021854    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:44.022387    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:44.518152    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.518189    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.518196    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.518199    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.520027    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.019772    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.019792    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.019799    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.019803    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.021628    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.518039    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.518053    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.518059    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.518064    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.520113    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:46.018198    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.018232    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.018239    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.018243    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.020136    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.518474    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.518490    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.518496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.518499    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.520505    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.520916    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:47.019124    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.019150    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.019162    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.022729    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:47.518316    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.518341    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.518351    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.518356    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.520471    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:48.019594    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.019620    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.019630    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.019636    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.023447    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:48.518492    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.518526    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.518583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.518593    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.523421    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:48.523787    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:49.019217    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.019242    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.019254    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.019260    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.022862    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:49.520299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.520324    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.520337    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.520342    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.523532    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.019383    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.019412    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.019424    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.019430    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.022847    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.519489    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.519503    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.519511    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.519515    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.522131    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:51.019130    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.019153    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.019163    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.022497    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:51.022894    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:51.518391    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.518448    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.518465    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.518476    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.521848    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.019014    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.019045    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.019103    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.019117    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.022339    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.519630    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.519644    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.519651    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.519655    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.522019    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.018435    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.018460    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.018472    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.018480    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.021850    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:53.518299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.518340    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.518349    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.518355    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.520795    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.521268    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:54.020380    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.020406    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.020418    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.020423    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.024178    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:54.519346    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.519364    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.519383    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.519387    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.521155    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:55.020400    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.020425    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.020437    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.020444    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.024326    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:55.519229    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.519245    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.519264    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.519268    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.521435    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:55.521852    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:56.019678    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.019703    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.019714    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.019719    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.023317    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:56.518539    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.518563    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.518576    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.518581    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.521781    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.020424    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.020449    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.020460    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.020465    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.024114    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.519399    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.519428    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.519468    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.519475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.522788    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.523223    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:58.018734    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.018759    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.018770    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.018777    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.022242    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.518348    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.518359    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.518371    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.518375    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.522907    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.523168    4789 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:28:58.523182    4789 node_ready.go:38] duration metric: took 16.504973252s for node "ha-431000-m02" to be "Ready" ...
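The GET loop above is a hand-rolled readiness wait: poll /api/v1/nodes/ha-431000-m02 roughly every 500ms until the Ready condition reports True, which here took about 16.5s. With the same kubeconfig, an equivalent one-liner (what kubectl would do, not what minikube runs):

    kubectl --kubeconfig /var/lib/minikube/kubeconfig \
      wait --for=condition=Ready node/ha-431000-m02 --timeout=6m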
	I0819 10:28:58.523189    4789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:28:58.523237    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:28:58.523243    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.523249    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.523253    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.528083    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.532699    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.532761    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:28:58.532768    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.532774    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.532776    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.535978    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.536344    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.536351    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.536358    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.536361    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.538061    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.538368    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.538377    4789 pod_ready.go:82] duration metric: took 5.660556ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538383    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538413    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:28:58.538417    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.538423    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.538428    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.540013    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.540457    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.540465    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.540471    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.540475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.542120    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.542393    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.542400    4789 pod_ready.go:82] duration metric: took 4.011453ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542406    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542439    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:28:58.542444    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.542449    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.542454    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.543986    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.544340    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.544347    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.544353    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.544356    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.545868    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.546173    4789 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.546181    4789 pod_ready.go:82] duration metric: took 3.769725ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546187    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546221    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:28:58.546226    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.546231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.546234    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.547638    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.548110    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.548118    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.548123    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.548127    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.549514    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.549853    4789 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.549860    4789 pod_ready.go:82] duration metric: took 3.668598ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.549868    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.718822    4789 request.go:632] Waited for 168.888912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718861    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718867    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.718872    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.718876    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.721032    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:58.919673    4789 request.go:632] Waited for 198.011193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919731    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919740    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.919750    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.919807    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.923236    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.923670    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.923682    4789 pod_ready.go:82] duration metric: took 373.799986ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
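
The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's client-side rate limiter, a token bucket applied before a request ever leaves the process, not from server-side API Priority and Fairness. A minimal sketch of where that limiter is configured, assuming the standard k8s.io/client-go packages; the QPS and Burst values shown are illustrative, not minikube's actual settings:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset whose client-side token bucket produces
    // the "Waited for ... due to client-side throttling" log lines when it
    // runs dry. QPS/Burst below are illustrative, not minikube's settings.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 5    // sustained requests per second before throttling
    	cfg.Burst = 10 // short bursts allowed above the sustained rate
    	return kubernetes.NewForConfig(cfg)
    }
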
	I0819 10:28:58.923691    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.119399    4789 request.go:632] Waited for 195.629207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119559    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119572    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.119583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.119589    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.122804    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.318619    4789 request.go:632] Waited for 195.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318674    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318695    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.318702    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.318705    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.320812    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.321165    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.321173    4789 pod_ready.go:82] duration metric: took 397.466691ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.321180    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.520541    4789 request.go:632] Waited for 199.292765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520642    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520652    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.520663    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.520672    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.524463    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.718728    4789 request.go:632] Waited for 192.615056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718803    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718811    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.718818    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.718823    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.720955    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.721397    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.721407    4789 pod_ready.go:82] duration metric: took 400.213219ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.721415    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.918907    4789 request.go:632] Waited for 197.434904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919004    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919014    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.919024    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.919030    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.922451    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.119192    4789 request.go:632] Waited for 196.220574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119263    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119272    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.119286    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.119297    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.122630    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.122957    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.122968    4789 pod_ready.go:82] duration metric: took 401.538458ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.122977    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.320524    4789 request.go:632] Waited for 197.475989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320660    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320672    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.320681    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.320689    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.323985    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.519403    4789 request.go:632] Waited for 194.628597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519535    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519546    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.519560    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.519568    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.523121    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.523435    4789 pod_ready.go:93] pod "kube-proxy-5h7j2" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.523449    4789 pod_ready.go:82] duration metric: took 400.456993ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.523457    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.718666    4789 request.go:632] Waited for 195.15054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718742    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718752    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.718786    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.718800    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.721920    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.918782    4789 request.go:632] Waited for 196.40919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918873    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918882    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.918896    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.918906    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.922355    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.922815    4789 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.922824    4789 pod_ready.go:82] duration metric: took 399.351509ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.922830    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.118854    4789 request.go:632] Waited for 195.977175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118950    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118965    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.118981    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.118987    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.122683    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.318886    4789 request.go:632] Waited for 195.887859ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319029    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319042    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.319053    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.319063    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.322689    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.323187    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.323200    4789 pod_ready.go:82] duration metric: took 400.355182ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.323208    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.518928    4789 request.go:632] Waited for 195.662505ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519043    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519057    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.519070    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.519077    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.522736    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.718819    4789 request.go:632] Waited for 195.65197ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718885    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718891    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.718899    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.718905    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.721246    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:29:01.721682    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.721691    4789 pod_ready.go:82] duration metric: took 398.467113ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.721701    4789 pod_ready.go:39] duration metric: took 3.198431164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
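
Every pod_ready block above is the same loop: GET the pod, inspect its Ready condition, GET the owning node, and repeat until ready or the 6m0s deadline expires. A minimal sketch of that polling pattern using k8s.io/client-go (the function name and intervals are illustrative, not minikube's internals):

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition reports True,
    // mirroring the GET-pod loop in the log above.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling through transient errors
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
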
	I0819 10:29:01.721718    4789 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:29:01.721774    4789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:29:01.735634    4789 api_server.go:72] duration metric: took 20.041851081s to wait for apiserver process to appear ...
	I0819 10:29:01.735647    4789 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:29:01.735663    4789 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:29:01.738815    4789 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:29:01.738848    4789 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:29:01.738854    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.738860    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.738864    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.739526    4789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:29:01.739580    4789 api_server.go:141] control plane version: v1.31.0
	I0819 10:29:01.739589    4789 api_server.go:131] duration metric: took 3.937962ms to wait for apiserver health ...
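
The healthz step above is a plain HTTPS GET against /healthz that expects a 200 response with body "ok". A stand-alone sketch using only Go's standard library; certificate verification is skipped here solely because the sketch does not load the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz issues the same GET /healthz probe shown in the log;
    // a 200 with body "ok" means the apiserver is serving.
    func checkHealthz(addr string) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(addr + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil
    }
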
	I0819 10:29:01.739594    4789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:29:01.918638    4789 request.go:632] Waited for 178.995687ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918733    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918745    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.918757    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.918762    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.922864    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:01.926606    4789 system_pods.go:59] 17 kube-system pods found
	I0819 10:29:01.926628    4789 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:01.926633    4789 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:01.926636    4789 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:01.926640    4789 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:01.926642    4789 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:01.926645    4789 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:01.926647    4789 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:01.926650    4789 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:01.926653    4789 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:01.926656    4789 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:01.926659    4789 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:01.926666    4789 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:01.926669    4789 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:01.926672    4789 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:01.926674    4789 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:01.926677    4789 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:01.926680    4789 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:01.926684    4789 system_pods.go:74] duration metric: took 187.080965ms to wait for pod list to return data ...
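
The system_pods step lists everything in kube-system and verifies each pod reports phase Running, which is what the seventeen "Running" lines above reflect. A minimal sketch of that check with k8s.io/client-go (the function name is illustrative):

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // allKubeSystemRunning mirrors the system_pods check above: list every
    // pod in kube-system and confirm each one is in phase Running.
    func allKubeSystemRunning(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return false, nil
    		}
    	}
    	return len(pods.Items) > 0, nil
    }
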
	I0819 10:29:01.926689    4789 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:29:02.119406    4789 request.go:632] Waited for 192.625822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119507    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119517    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.119528    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.119535    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.123120    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.123283    4789 default_sa.go:45] found service account: "default"
	I0819 10:29:02.123293    4789 default_sa.go:55] duration metric: took 196.595366ms for default service account to be created ...
	I0819 10:29:02.123300    4789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:29:02.319795    4789 request.go:632] Waited for 196.43255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319928    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319939    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.319947    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.319954    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.324586    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:02.328058    4789 system_pods.go:86] 17 kube-system pods found
	I0819 10:29:02.328071    4789 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:02.328075    4789 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:02.328078    4789 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:02.328083    4789 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:02.328086    4789 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:02.328088    4789 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:02.328091    4789 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:02.328093    4789 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:02.328096    4789 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:02.328098    4789 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:02.328101    4789 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:02.328103    4789 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:02.328106    4789 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:02.328109    4789 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:02.328112    4789 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:02.328115    4789 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:02.328117    4789 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:02.328122    4789 system_pods.go:126] duration metric: took 204.813151ms to wait for k8s-apps to be running ...
	I0819 10:29:02.328133    4789 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:29:02.328183    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:29:02.340002    4789 system_svc.go:56] duration metric: took 11.865981ms WaitForService to wait for kubelet
	I0819 10:29:02.340017    4789 kubeadm.go:582] duration metric: took 20.646222268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
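
The system_svc step above runs systemctl is-active --quiet on the node, which prints nothing and exits 0 only when the unit is active; that exit code is the readiness signal. A local sketch of the same probe with os/exec, keeping the exact argv the log shows (minikube itself executes this over SSH inside the VM):

    package main

    import "os/exec"

    // kubeletActive mirrors the probe from the log: "systemctl is-active
    // --quiet" exits 0 only when the unit is active.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }
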
	I0819 10:29:02.340034    4789 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:29:02.518831    4789 request.go:632] Waited for 178.726274ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518969    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518980    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.518991    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.518998    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.522659    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.523326    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523339    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523348    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523351    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523354    4789 node_conditions.go:105] duration metric: took 183.311856ms to run NodePressure ...
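
The NodePressure step lists all nodes and reads their capacity, which is where the two ephemeral-storage/cpu pairs above (one per node) come from. A minimal sketch with k8s.io/client-go (the function name is illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity mirrors the NodePressure step above: list nodes
    // and report each one's ephemeral-storage and CPU capacity.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }
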
	I0819 10:29:02.523361    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:29:02.523378    4789 start.go:255] writing updated cluster config ...
	I0819 10:29:02.544110    4789 out.go:201] 
	I0819 10:29:02.566227    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:02.566358    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.588965    4789 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:29:02.630777    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:29:02.630803    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:29:02.630953    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:29:02.630966    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:29:02.631053    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.631767    4789 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:29:02.631849    4789 start.go:364] duration metric: took 64.609µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:29:02.631869    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:29:02.631978    4789 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0819 10:29:02.652968    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:29:02.653116    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:29:02.653158    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:29:02.663539    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0819 10:29:02.663925    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:29:02.664263    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:29:02.664277    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:29:02.664539    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:29:02.664672    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:02.664758    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:02.664867    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:29:02.664899    4789 client.go:168] LocalClient.Create starting
	I0819 10:29:02.664932    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:29:02.664992    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665005    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665051    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:29:02.665087    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665103    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665116    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:29:02.665122    4789 main.go:141] libmachine: (ha-431000-m03) Calling .PreCreateCheck
	I0819 10:29:02.665218    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.665228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:02.674109    4789 main.go:141] libmachine: Creating machine...
	I0819 10:29:02.674126    4789 main.go:141] libmachine: (ha-431000-m03) Calling .Create
	I0819 10:29:02.674302    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.674550    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.674293    4918 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:29:02.674675    4789 main.go:141] libmachine: (ha-431000-m03) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:29:02.956098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.955977    4918 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa...
	I0819 10:29:03.041212    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.041121    4918 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk...
	I0819 10:29:03.041230    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing magic tar header
	I0819 10:29:03.041239    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing SSH key tar header
	I0819 10:29:03.042098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.042003    4918 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03 ...
	I0819 10:29:03.582755    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.582783    4789 main.go:141] libmachine: (ha-431000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:29:03.582846    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:29:03.618942    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:29:03.618960    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:29:03.619021    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619049    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619085    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:29:03.619116    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:29:03.619133    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:29:03.621990    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Pid is 4921
	I0819 10:29:03.622461    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:29:03.622497    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.622585    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:03.623424    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:03.623486    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:03.623500    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:03.623537    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:03.623548    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:03.623558    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:03.623568    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:03.629643    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:29:03.638725    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:29:03.639577    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:03.639599    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:03.639609    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:03.639622    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.022361    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:29:04.022375    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:29:04.137228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:04.137262    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:04.137274    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:04.137284    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.138001    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:29:04.138016    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:29:05.623879    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 1
	I0819 10:29:05.623896    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:05.624023    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:05.624809    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:05.624873    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:05.624888    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:05.624904    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:05.624917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:05.624926    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:05.624935    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:07.626679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 2
	I0819 10:29:07.626696    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:07.626779    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:07.627539    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:07.627582    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:07.627592    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:07.627610    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:07.627619    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:07.627626    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:07.627635    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.627812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 3
	I0819 10:29:09.627828    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:09.627917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:09.628679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:09.628746    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:09.628777    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:09.628791    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:09.628799    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:09.628806    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:09.628812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.722721    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:29:09.722792    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:29:09.722802    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:29:09.745848    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:29:11.630390    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 4
	I0819 10:29:11.630407    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:11.630495    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:11.631275    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:11.631321    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:11.631331    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:11.631340    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:11.631359    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:11.631366    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:11.631387    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:13.633236    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 5
	I0819 10:29:13.633251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.633339    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.634147    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:13.634209    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0819 10:29:13.634221    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:29:13.634228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:29:13.634232    4789 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
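
The retry loop above polls macOS's /var/db/dhcpd_leases until the vmnet DHCP server hands the new VM's generated MAC an address. A simplified sketch of that lookup, assuming the { name=... ip_address=... hw_address=1,<mac> } block layout shown in the entries above; real code should also tolerate leading-zero differences in the MAC:

    package main

    import (
    	"bufio"
    	"os"
    	"strings"
    )

    // leaseIPForMAC scans /var/db/dhcpd_leases for a hardware address and
    // returns the ip_address of the lease block containing it. Assumes
    // ip_address= appears before hw_address= within each block, as in the
    // entries quoted in the log above.
    func leaseIPForMAC(mac string) (string, bool) {
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		return "", false
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
    			ip = v
    		} else if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
    			return ip, true
    		}
    	}
    	return "", false
    }
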
	I0819 10:29:13.634299    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:13.634943    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635064    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635157    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:29:13.635165    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:29:13.635251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.635310    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.636120    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:29:13.636129    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:29:13.636133    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:29:13.636138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:13.636228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:13.636309    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636392    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636477    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:13.636587    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:13.636755    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:13.636763    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:29:14.697546    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
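
The "About to run SSH command: exit 0" probe above is how libmachine decides sshd is up: dial the VM, authenticate with the generated key, run a no-op command. A minimal sketch with golang.org/x/crypto/ssh; host-key checking is skipped purely to keep the sketch short:

    package main

    import (
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // sshAlive runs "exit 0" on the new VM, the same liveness probe shown
    // in the log; success means sshd is up and the key is accepted.
    func sshAlive(addr, user, keyPath string) bool {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return false
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return false
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return false
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return false
    	}
    	defer sess.Close()
    	return sess.Run("exit 0") == nil
    }
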
	I0819 10:29:14.697558    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:29:14.697564    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.697702    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.697798    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.697887    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.698009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.698168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.698318    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.698326    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:29:14.765778    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:29:14.765827    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:29:14.765833    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:29:14.765839    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.765977    4789 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:29:14.765988    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.766081    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.766185    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.766270    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766369    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766481    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.766635    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.766783    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.766792    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:29:14.841753    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:29:14.841769    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.841901    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.842009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842101    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842195    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.842324    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.842477    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.842489    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:29:14.911764    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
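	The shell block above makes the /etc/hosts update idempotent: if a line ending in the hostname already exists it does nothing, otherwise it rewrites an existing 127.0.1.1 entry in place or appends a fresh one. A sketch of the same logic in Go (the hostname-presence check is simplified relative to the `grep -xq` in the log):

	```go
	// Sketch: idempotent /etc/hosts update mirroring the shell above.
	// The substring check is a simplification of the whole-line grep.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	func ensureHostsEntry(hosts, hostname string) string {
		if strings.Contains(hosts, hostname) {
			return hosts // entry already present, nothing to do
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + hostname
		if re.MatchString(hosts) {
			return re.ReplaceAllString(hosts, entry) // rewrite existing entry
		}
		return hosts + entry + "\n" // append a new entry
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-431000-m03"))
	}
	```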
	I0819 10:29:14.911779    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:29:14.911793    4789 buildroot.go:174] setting up certificates
	I0819 10:29:14.911800    4789 provision.go:84] configureAuth start
	I0819 10:29:14.911807    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.911942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:14.912037    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.912110    4789 provision.go:143] copyHostCerts
	I0819 10:29:14.912141    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912187    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:29:14.912193    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912326    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:29:14.912504    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912534    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:29:14.912539    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912651    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:29:14.912808    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912854    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:29:14.912859    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912935    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:29:14.913083    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
	I0819 10:29:15.064390    4789 provision.go:177] copyRemoteCerts
	I0819 10:29:15.064440    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:29:15.064455    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.064599    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.064695    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.064786    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.064886    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:15.103656    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:29:15.103727    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:29:15.123430    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:29:15.123497    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:29:15.143265    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:29:15.143333    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:29:15.162885    4789 provision.go:87] duration metric: took 251.064942ms to configureAuth
	I0819 10:29:15.162900    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:29:15.163052    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:15.163065    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:15.163221    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.163329    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.163417    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163506    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.163693    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.163824    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.163831    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:29:15.225270    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:29:15.225286    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:29:15.225356    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:29:15.225368    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.225510    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.225619    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225708    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225810    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.225948    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.226090    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.226134    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:29:15.299640    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
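	The empty `ExecStart=` line in the unit above is the standard systemd idiom the unit's own comments describe: an empty assignment clears any command inherited from a base configuration, so the single `ExecStart=/usr/bin/dockerd ...` that follows is the only one in effect. Without the reset, systemd would reject the unit for a non-oneshot service. A deliberately naive sketch that checks a unit body for that mistake:

	```go
	// Sketch: flag a unit body that would trip systemd's
	// "more than one ExecStart=" error for a Type=notify service.
	// Illustrative parsing only, not a real unit-file parser.
	package main

	import (
		"fmt"
		"strings"
	)

	func hasDuplicateExecStart(unit string) bool {
		active := 0
		for _, line := range strings.Split(unit, "\n") {
			line = strings.TrimSpace(line)
			if !strings.HasPrefix(line, "ExecStart=") {
				continue
			}
			if line == "ExecStart=" {
				active = 0 // empty assignment resets the command list
				continue
			}
			active++
		}
		return active > 1
	}

	func main() {
		unit := "ExecStart=/usr/bin/old\nExecStart=\nExecStart=/usr/bin/dockerd\n"
		fmt.Println(hasDuplicateExecStart(unit)) // false: the reset cleared the first command
	}
	```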
	
	I0819 10:29:15.299658    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.299797    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.299889    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.299978    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.300067    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.300202    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.300355    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.300368    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:29:16.819930    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:29:16.819945    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:29:16.819953    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetURL
	I0819 10:29:16.820095    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:29:16.820107    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:29:16.820113    4789 client.go:171] duration metric: took 14.154897138s to LocalClient.Create
	I0819 10:29:16.820124    4789 start.go:167] duration metric: took 14.154947877s to libmachine.API.Create "ha-431000"
	I0819 10:29:16.820129    4789 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:29:16.820136    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:29:16.820145    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.820288    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:29:16.820301    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.820396    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.820494    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.820582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.820664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:16.862693    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:29:16.866416    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:29:16.866431    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:29:16.866540    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:29:16.866725    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:29:16.866732    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:29:16.866944    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:29:16.874578    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:29:16.904910    4789 start.go:296] duration metric: took 84.771069ms for postStartSetup
	I0819 10:29:16.904942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:16.905569    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.905740    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:16.906122    4789 start.go:128] duration metric: took 14.273822612s to createHost
	I0819 10:29:16.906138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.906230    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.906303    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906387    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906475    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.906573    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:16.906690    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:16.906697    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:29:16.969389    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088556.958185685
	
	I0819 10:29:16.969401    4789 fix.go:216] guest clock: 1724088556.958185685
	I0819 10:29:16.969406    4789 fix.go:229] Guest: 2024-08-19 10:29:16.958185685 -0700 PDT Remote: 2024-08-19 10:29:16.906131 -0700 PDT m=+127.499217490 (delta=52.054685ms)
	I0819 10:29:16.969416    4789 fix.go:200] guest clock delta is within tolerance: 52.054685ms
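	The guest clock check above samples the VM with `date +%s.%N` and compares it against host time; here the delta (~52ms) is within tolerance, so no resync is needed. A small sketch of that comparison, with the timestamps taken from the log and the tolerance value chosen for illustration:

	```go
	// Sketch: compute a guest/host clock delta from `date +%s.%N` output.
	// Timestamps are the ones in the log; the 2s tolerance is illustrative.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(1724088556, 906131000) // "Remote" timestamp from the log
		d, _ := clockDelta("1724088556.958185685", host)
		fmt.Println(d, "within tolerance:", d.Abs() < 2*time.Second)
	}
	```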
	I0819 10:29:16.969419    4789 start.go:83] releasing machines lock for "ha-431000-m03", held for 14.337247496s
	I0819 10:29:16.969437    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.969573    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.992258    4789 out.go:177] * Found network options:
	I0819 10:29:17.014265    4789 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:29:17.037508    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.037542    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.037561    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038432    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038682    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038835    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:29:17.038873    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:29:17.038922    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.038957    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.039067    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:29:17.039087    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:17.039116    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039298    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039332    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039497    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039590    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039679    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039721    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:17.039809    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:29:17.074320    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:29:17.074385    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:29:17.120302    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:29:17.120318    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.120398    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.135851    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:29:17.144402    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:29:17.152735    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.152784    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:29:17.161185    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.169599    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:29:17.177908    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.186319    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:29:17.194967    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:29:17.203702    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:29:17.212228    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:29:17.220632    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:29:17.228164    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:29:17.235717    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.329551    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:29:17.348829    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.348909    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:29:17.363903    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.374976    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:29:17.393061    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.404238    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.414728    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:29:17.438632    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
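	The sequence above stops whichever competing runtimes are active before docker is enabled: `systemctl is-active --quiet` probes containerd and crio, `systemctl stop -f` shuts them down, and a second probe confirms the stop. A sketch of that check-then-stop loop using the same systemctl calls as the log (running it for real requires root and systemd):

	```go
	// Sketch: stop competing container runtimes, mirroring the
	// systemctl calls in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func stopIfActive(service string) error {
		// `systemctl is-active --quiet` exits 0 only when the unit is active.
		if err := exec.Command("systemctl", "is-active", "--quiet", service).Run(); err != nil {
			return nil // not active: nothing to stop
		}
		return exec.Command("systemctl", "stop", "-f", service).Run()
	}

	func main() {
		for _, svc := range []string{"containerd", "crio"} {
			if err := stopIfActive(svc); err != nil {
				fmt.Println("stop", svc, "failed:", err)
			}
		}
	}
	```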
	I0819 10:29:17.449143    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.464536    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:29:17.467445    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:29:17.474809    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:29:17.488421    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:29:17.581504    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:29:17.684960    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.684980    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
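	The 130-byte `/etc/docker/daemon.json` pushed here is what switches docker to the "cgroupfs" cgroup driver. The log does not show the file's contents, so the sketch below is an assumption of what a minimal such file looks like, built with the standard `exec-opts` key:

	```go
	// Sketch: generate a minimal daemon.json selecting the cgroupfs driver.
	// The exact file minikube writes is not shown in the log; this content
	// is an assumption for illustration.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
	}
	```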
	I0819 10:29:17.699658    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.803979    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:30:18.773891    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.968555005s)
	I0819 10:30:18.774012    4789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:30:18.808676    4789 out.go:201] 
	W0819 10:30:18.829152    4789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:29:15 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570013158Z" level=info msg="Starting up"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570447745Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.572542412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.584880924Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603137975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603181724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603219390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603233227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603303033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603338653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603471354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603509282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603521199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603528665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603591360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603811486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605351283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605389063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605504861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605538594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605610859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605677674Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607907354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607976584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607991948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608010711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608023403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608093276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608724366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608874333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608913351Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608929178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608943960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608968346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609006571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609021660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609032833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609044499Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609055485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609066063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609088279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609103865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609115537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609130257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609139734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609151164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609161605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609173829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609200246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609211000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609224200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609237871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609251525Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609296616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609316285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609327369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609362155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609478815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609512436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609530768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609541857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609553085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609563545Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610591556Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610680787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610769049Z" level=info msg="containerd successfully booted in 0.026402s"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.601341697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.606766805Z" level=info msg="Loading containers: start."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.688780306Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.769433920Z" level=info msg="Loading containers: done."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776749571Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776865122Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.804822251Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.805010917Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:29:16 ha-431000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.814047535Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:29:17 ha-431000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815466623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815881336Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815956644Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.816022765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:18 ha-431000-m03 dockerd[921]: time="2024-08-19T17:29:18.853267859Z" level=info msg="Starting up"
	Aug 19 17:30:18 ha-431000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
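	The journalctl excerpt pins down the failure: the first dockerd (pid 514) starts cleanly, is then terminated by the config change, and the restarted dockerd (pid 921) blocks for the full minute trying to dial `/run/containerd/containerd.sock` before exiting with "context deadline exceeded", which fails docker.service. A sketch of the same dial-with-deadline pattern (socket path from the log, timeout shortened for illustration):

	```go
	// Sketch: dial a unix socket under a context deadline, the pattern
	// behind the "context deadline exceeded" failure above.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		var d net.Dialer
		conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. context deadline exceeded
			return
		}
		conn.Close()
		fmt.Println("containerd socket is reachable")
	}
	```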
	W0819 10:30:18.829235    4789 out.go:270] * 
	W0819 10:30:18.830413    4789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:30:18.888275    4789 out.go:201] 

                                                
                                                
** /stderr **
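The decisive error in the captured journal above is dockerd timing out after 60 seconds while dialing /run/containerd/containerd.sock (the 17:29:18 "Starting up" followed by the 17:30:18 dial failure). A minimal, hypothetical triage sketch for this class of failure, assuming shell access to the guest VM (e.g. via minikube ssh); these commands are illustrative only and were not part of the recorded session:

	# Is containerd active, and what do its recent logs say?
	sudo systemctl status containerd
	sudo journalctl -u containerd --no-pager -n 50
	# Does the socket dockerd dials actually exist?
	ls -l /run/containerd/containerd.sock
	# Can a client connect to that socket at all?
	sudo ctr --address /run/containerd/containerd.sock version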
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 start -p ha-431000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
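The advice box in the stderr above asks for full logs to be attached to a GitHub issue. A hypothetical collection command against this profile, mirroring the harness's own invocation style below (--file and -p are standard minikube flags; this was not run as part of the recorded session):

	out/minikube-darwin-amd64 -p ha-431000 logs --file=logs.txt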
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.231148394s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-----------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                     Args                      |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-622000 image ls                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	| image          | functional-622000 image load                  | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | /Users/jenkins/workspace/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                             |                   |         |         |                     |                     |
	| image          | functional-622000 image ls                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	| image          | functional-622000 image save --daemon         | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | kicbase/echo-server:functional-622000         |                   |         |         |                     |                     |
	|                | --alsologtostderr                             |                   |         |         |                     |                     |
	| ssh            | functional-622000 ssh sudo cat                | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | /etc/ssl/certs/2174.pem                       |                   |         |         |                     |                     |
	| ssh            | functional-622000 ssh sudo cat                | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | /usr/share/ca-certificates/2174.pem           |                   |         |         |                     |                     |
	| ssh            | functional-622000 ssh sudo cat                | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | /etc/ssl/certs/51391683.0                     |                   |         |         |                     |                     |
	| ssh            | functional-622000 ssh sudo cat                | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | /etc/ssl/certs/21742.pem                      |                   |         |         |                     |                     |
	| ssh            | functional-622000 ssh sudo cat                | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | /usr/share/ca-certificates/21742.pem          |                   |         |         |                     |                     |
	| ssh            | functional-622000 ssh sudo cat                | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | /etc/ssl/certs/3ec20f2e.0                     |                   |         |         |                     |                     |
	| docker-env     | functional-622000 docker-env                  | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	| docker-env     | functional-622000 docker-env                  | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	| ssh            | functional-622000 ssh sudo cat                | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | /etc/test/nested/copy/2174/hosts              |                   |         |         |                     |                     |
	| image          | functional-622000                             | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | image ls --format short                       |                   |         |         |                     |                     |
	|                | --alsologtostderr                             |                   |         |         |                     |                     |
	| image          | functional-622000                             | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | image ls --format yaml                        |                   |         |         |                     |                     |
	|                | --alsologtostderr                             |                   |         |         |                     |                     |
	| image          | functional-622000                             | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | image ls --format json                        |                   |         |         |                     |                     |
	|                | --alsologtostderr                             |                   |         |         |                     |                     |
	| image          | functional-622000                             | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | image ls --format table                       |                   |         |         |                     |                     |
	|                | --alsologtostderr                             |                   |         |         |                     |                     |
	| ssh            | functional-622000 ssh pgrep                   | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT |                     |
	|                | buildkitd                                     |                   |         |         |                     |                     |
	| image          | functional-622000 image build -t              | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | localhost/my-image:functional-622000          |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr              |                   |         |         |                     |                     |
	| image          | functional-622000 image ls                    | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	| update-context | functional-622000                             | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | update-context                                |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                        |                   |         |         |                     |                     |
	| update-context | functional-622000                             | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | update-context                                |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                        |                   |         |         |                     |                     |
	| update-context | functional-622000                             | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:26 PDT | 19 Aug 24 10:26 PDT |
	|                | update-context                                |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                        |                   |         |         |                     |                     |
	| delete         | -p functional-622000                          | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:27 PDT | 19 Aug 24 10:27 PDT |
	| start          | -p ha-431000 --wait=true                      | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:27 PDT |                     |
	|                | --memory=2200 --ha                            |                   |         |         |                     |                     |
	|                | -v=7 --alsologtostderr                        |                   |         |         |                     |                     |
	|                | --driver=hyperkit                             |                   |         |         |                     |                     |
	|----------------|-----------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:27:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:27:09.441458    4789 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:27:09.441716    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441721    4789 out.go:358] Setting ErrFile to fd 2...
	I0819 10:27:09.441725    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441914    4789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:27:09.443405    4789 out.go:352] Setting JSON to false
	I0819 10:27:09.468451    4789 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3399,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:27:09.468547    4789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:27:09.554597    4789 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:27:09.577770    4789 notify.go:220] Checking for updates...
	I0819 10:27:09.609734    4789 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:27:09.676944    4789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:09.699980    4789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:27:09.722951    4789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:27:09.744804    4789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.765726    4789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:27:09.787204    4789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:27:09.817679    4789 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 10:27:09.859821    4789 start.go:297] selected driver: hyperkit
	I0819 10:27:09.859849    4789 start.go:901] validating driver "hyperkit" against <nil>
	I0819 10:27:09.859893    4789 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:27:09.864287    4789 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.864395    4789 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:27:09.872759    4789 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:27:09.876743    4789 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.876768    4789 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:27:09.876803    4789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:27:09.877011    4789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:27:09.877072    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:09.877082    4789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 10:27:09.877094    4789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:27:09.877164    4789 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:09.877251    4789 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.919755    4789 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:27:09.940604    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:09.940675    4789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:27:09.940720    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:09.940918    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:09.940931    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:09.941271    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:09.941299    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json: {Name:mkf9dcbb24d8b9fbe62d81f81a7a87fec457d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:09.941835    4789 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:09.941963    4789 start.go:364] duration metric: took 95.166µs to acquireMachinesLock for "ha-431000"
	I0819 10:27:09.941997    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:09.942082    4789 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 10:27:09.963791    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:09.964075    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.964148    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:09.974068    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51111
	I0819 10:27:09.974512    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:09.974919    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:09.974932    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:09.975172    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:09.975283    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:09.975374    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:09.975471    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:09.975492    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:09.975527    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:09.975578    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975594    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975657    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:09.975695    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975707    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975719    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:09.975729    4789 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0819 10:27:09.975800    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.975970    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:09.976388    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:09.976397    4789 main.go:141] libmachine: (ha-431000) Calling .Create
	I0819 10:27:09.976462    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.976580    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:09.976459    4799 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.976633    4789 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:10.160305    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.160220    4799 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa...
	I0819 10:27:10.258779    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.258678    4799 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0819 10:27:10.258792    4789 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0819 10:27:10.258800    4789 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0819 10:27:10.259681    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.259588    4799 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000 ...
	I0819 10:27:10.634434    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.634476    4789 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:27:10.634529    4789 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:27:10.744945    4789 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:27:10.744966    4789 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:10.744993    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745030    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745065    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:10.745094    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:10.745118    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:10.748020    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Pid is 4802
	I0819 10:27:10.748404    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:27:10.748413    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.748494    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:10.749357    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:10.749398    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:10.749412    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:10.749423    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:10.749431    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:10.755634    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:10.806699    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:10.807300    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:10.807314    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:10.807322    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:10.807335    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.184562    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:11.184575    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:11.299194    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:11.299213    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:11.299228    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:11.299236    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.300075    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:11.300086    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:12.750038    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0819 10:27:12.750054    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:12.750189    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:12.750969    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:12.751019    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:12.751030    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:12.751039    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:12.751052    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:14.752158    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0819 10:27:14.752174    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:14.752264    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:14.753040    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:14.753090    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:14.753102    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:14.753111    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:14.753117    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.754325    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0819 10:27:16.754340    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:16.754402    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:16.755326    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:16.755347    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:16.755354    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:16.755373    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:16.755390    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.856153    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:27:16.856252    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:27:16.856262    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:27:16.880804    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:27:18.757489    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0819 10:27:18.757504    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:18.757601    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:18.758394    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:18.758435    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:18.758449    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:18.758481    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:18.758495    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:20.758927    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0819 10:27:20.758946    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.759035    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.759848    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:20.759873    4789 main.go:141] libmachine: (ha-431000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:20.759888    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:20.759901    4789 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:27:20.759913    4789 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
	I0819 10:27:20.759952    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:20.760523    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760634    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760741    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:27:20.760753    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:20.760839    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.760885    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.761678    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:27:20.761690    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:27:20.761696    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:27:20.761702    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:20.761795    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:20.761883    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.761969    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.762060    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:20.762168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:20.762361    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:20.762369    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:27:21.818394    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:27:21.818406    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:27:21.818419    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.818554    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.818654    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818747    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818841    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.818981    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.819131    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.819139    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:27:21.870784    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:27:21.870826    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:27:21.870831    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:27:21.870837    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.870976    4789 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:27:21.870986    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.871077    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.871169    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.871272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871352    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871452    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.871577    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.871711    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.871719    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:27:21.937676    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:27:21.937694    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.937826    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.937927    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938017    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938112    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.938245    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.938391    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.938402    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:27:21.996654    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:27:21.996676    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:27:21.996692    4789 buildroot.go:174] setting up certificates
	I0819 10:27:21.996701    4789 provision.go:84] configureAuth start
	I0819 10:27:21.996714    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.996873    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:21.996990    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.997094    4789 provision.go:143] copyHostCerts
	I0819 10:27:21.997133    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997201    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:27:21.997209    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997337    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:27:21.997534    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997567    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:27:21.997572    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997714    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:27:21.997882    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.997926    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:27:21.997941    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.998049    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:27:21.998203    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:27:22.044837    4789 provision.go:177] copyRemoteCerts
	I0819 10:27:22.044896    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:27:22.044908    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.045021    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.045107    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.045191    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.045288    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:22.078701    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:27:22.078779    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:27:22.098027    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:27:22.098092    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:27:22.117169    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:27:22.117235    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:27:22.137411    4789 provision.go:87] duration metric: took 140.68689ms to configureAuth
	I0819 10:27:22.137424    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:27:22.137558    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:22.137574    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:22.137700    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.137783    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.137859    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.137942    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.138028    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.138134    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.138266    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.138274    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:27:22.191384    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:27:22.191397    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:27:22.191469    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:27:22.191481    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.191636    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.191724    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191834    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191924    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.192051    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.192193    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.192236    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:27:22.256138    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:27:22.256165    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.256301    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.256391    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256475    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256578    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.256695    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.256839    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.256851    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:27:23.816844    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:27:23.816860    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:27:23.816871    4789 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0819 10:27:23.817008    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:27:23.817016    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:27:23.817020    4789 client.go:171] duration metric: took 13.841219093s to LocalClient.Create
	I0819 10:27:23.817036    4789 start.go:167] duration metric: took 13.84126124s to libmachine.API.Create "ha-431000"
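
Two details in the unit handling above are worth noting. The empty ExecStart= line clears any inherited start command before the real one is set, since systemd refuses multiple ExecStart= values outside Type=oneshot (the unit's own comments say as much). And the diff-then-move one-liner is a write-if-changed pattern: the new unit only replaces the old one, followed by a daemon-reload and restart, when the two actually differ, so an unchanged config costs no Docker restart. A hedged restatement of the pattern, plus a check of the effective unit:

	# Write-if-changed: only swap in the new unit (and restart) when it differs.
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl daemon-reload && sudo systemctl restart docker; }
	# Inspect the unit systemd actually loaded:
	sudo systemctl cat docker.service
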
	I0819 10:27:23.817044    4789 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:27:23.817051    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:27:23.817063    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.817219    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:27:23.817232    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.817321    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.817402    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.817497    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.817595    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.852993    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:27:23.857771    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:27:23.857792    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:27:23.857909    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:27:23.858094    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:27:23.858100    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:27:23.858323    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:27:23.868639    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:23.894485    4789 start.go:296] duration metric: took 77.430316ms for postStartSetup
	I0819 10:27:23.894509    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:23.895099    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.895256    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:23.895585    4789 start.go:128] duration metric: took 13.953185373s to createHost
	I0819 10:27:23.895598    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.895691    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.895790    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895879    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895966    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.896069    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:23.896228    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:23.896236    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:27:23.956133    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088443.744394113
	
	I0819 10:27:23.956145    4789 fix.go:216] guest clock: 1724088443.744394113
	I0819 10:27:23.956151    4789 fix.go:229] Guest: 2024-08-19 10:27:23.744394113 -0700 PDT Remote: 2024-08-19 10:27:23.895593 -0700 PDT m=+14.491162031 (delta=-151.198887ms)
	I0819 10:27:23.956169    4789 fix.go:200] guest clock delta is within tolerance: -151.198887ms
	I0819 10:27:23.956173    4789 start.go:83] releasing machines lock for "ha-431000", held for 14.013893151s
	I0819 10:27:23.956192    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956322    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.956416    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956749    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956860    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956951    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:27:23.956980    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957023    4789 ssh_runner.go:195] Run: cat /version.json
	I0819 10:27:23.957036    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957073    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957109    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957170    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957184    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957292    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957350    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.957384    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:24.032926    4789 ssh_runner.go:195] Run: systemctl --version
	I0819 10:27:24.037723    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:27:24.041939    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:27:24.041985    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:27:24.055424    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:27:24.055435    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.055529    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.070257    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:27:24.079169    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:27:24.088264    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.088319    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:27:24.097172    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.105902    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:27:24.114585    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.123406    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:27:24.132626    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:27:24.141378    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:27:24.150490    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:27:24.158980    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:27:24.167068    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:27:24.175030    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.269460    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:27:24.289328    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.289405    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:27:24.304907    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.317291    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:27:24.330289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.340851    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.351456    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:27:24.376914    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.387402    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.402522    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:27:24.405426    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:27:24.412799    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:27:24.426019    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:27:24.528550    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:27:24.636829    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.636893    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
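
The 130-byte daemon.json written here is what switches dockerd to the cgroupfs cgroup driver; the log doesn't print its contents (presumably an "exec-opts": ["native.cgroupdriver=cgroupfs"] entry), but the result is observable with the same query the provisioner itself runs further down:

	# Confirm the daemon picked up the cgroup driver (expected: cgroupfs):
	docker info --format '{{.CgroupDriver}}'
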
	I0819 10:27:24.652027    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.753641    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:27.037286    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.283575266s)
	I0819 10:27:27.037346    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:27:27.047775    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:27:27.062961    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.074027    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:27:27.172330    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:27:27.284593    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.395779    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:27:27.409552    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.420868    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.532356    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:27:27.591558    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:27:27.591636    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:27:27.595967    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:27:27.596013    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:27:27.599275    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:27:27.625101    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
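
The bare crictl version call above works because /etc/crictl.yaml, rewritten a moment earlier, points crictl at the cri-dockerd socket. The equivalent call with the endpoint named explicitly (a hedged sketch; run inside the guest):

	# Bypass /etc/crictl.yaml and pass the CRI endpoint directly:
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
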
	I0819 10:27:27.625173    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.642636    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.693299    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:27:27.693355    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:27.693783    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:27:27.698129    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
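
The /etc/hosts rewrite above filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back with cp rather than mv, plausibly so the original inode of /etc/hosts survives (which matters when the file is a mount point). The same filter-append-copy pattern, generalized with hypothetical names:

	# Replace one /etc/hosts entry without sed -i or mv (my.alias is made up):
	{ grep -v $'\tmy.alias$' /etc/hosts; echo $'10.0.0.1\tmy.alias'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # cp keeps the /etc/hosts inode intact
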
	I0819 10:27:27.708916    4789 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:27:27.708982    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:27.709038    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:27.721971    4789 docker.go:685] Got preloaded images: 
	I0819 10:27:27.721984    4789 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0819 10:27:27.722034    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:27.730353    4789 ssh_runner.go:195] Run: which lz4
	I0819 10:27:27.733218    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 10:27:27.733323    4789 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:27:27.736425    4789 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:27:27.736445    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0819 10:27:28.750864    4789 docker.go:649] duration metric: took 1.017557348s to copy over tarball
	I0819 10:27:28.750956    4789 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:27:31.074672    4789 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323648699s)
	I0819 10:27:31.074688    4789 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:27:31.100633    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:31.109680    4789 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0819 10:27:31.123335    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:31.234501    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:33.578614    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344043512s)
	I0819 10:27:33.578701    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:33.592021    4789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
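
The image list above confirms the preload worked: the ~342 MB tarball scp'd at 10:27:27 was unpacked straight into /var, populating Docker's image store without a single registry pull, and tar's --xattrs --xattrs-include security.capability flags preserve Linux file-capability attributes on the extracted binaries. A hedged manual equivalent of that extraction:

	# Manual equivalent of the preload step (same path as the scp target above):
	lz4 -dc /preloaded.tar.lz4 | sudo tar -x --xattrs --xattrs-include security.capability -C /var
	docker images --format '{{.Repository}}:{{.Tag}}'   # should match the list above
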
	I0819 10:27:33.592040    4789 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:27:33.592048    4789 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:27:33.592132    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:27:33.592198    4789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:27:33.629283    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:33.629295    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:33.629309    4789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:27:33.629329    4789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:27:33.629424    4789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
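The generated config bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one file, which is copied to /var/tmp/minikube/kubeadm.yaml below and fed to kubeadm init at 10:27:34. It can be sanity-checked offline first (a hedged sketch; "config validate" assumes kubeadm >= 1.26):

	# Offline checks of the generated config before the real init:
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
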
	I0819 10:27:33.629439    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:27:33.629491    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:27:33.642904    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:27:33.642969    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
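
This manifest is installed below as a static pod (/etc/kubernetes/manifests/kube-vip.yaml), so the kubelet runs kube-vip without any API server involvement; the pod then leader-elects via the plndr-cp-lock lease and binds the HA VIP 192.169.0.254 on eth0. Hedged checks once the control plane is up:

	# After kubeadm init: the static pod should be running, and the elected leader holds the VIP.
	kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system get pods | grep kube-vip
	ip addr show eth0 | grep 192.169.0.254
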
	I0819 10:27:33.643018    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:27:33.652008    4789 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:27:33.652070    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:27:33.660066    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:27:33.673571    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:27:33.686700    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:27:33.700085    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0819 10:27:33.713804    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:27:33.716661    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:33.726684    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:33.822205    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:27:33.836833    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:27:33.836844    4789 certs.go:194] generating shared ca certs ...
	I0819 10:27:33.836855    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.837051    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:27:33.837132    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:27:33.837142    4789 certs.go:256] generating profile certs ...
	I0819 10:27:33.837189    4789 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:27:33.837203    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0819 10:27:33.888319    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt ...
	I0819 10:27:33.888333    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt: {Name:mk2ecc34873277fbe11bf267ec0d97684e18e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888666    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key ...
	I0819 10:27:33.888675    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key: {Name:mk51abee214c838f4621902241303fe73ba93aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888900    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e
	I0819 10:27:33.888915    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0819 10:27:34.060027    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e ...
	I0819 10:27:34.060046    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e: {Name:mk108eb9cf88ab2aae15883e4a3724751adb3118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060347    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e ...
	I0819 10:27:34.060356    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e: {Name:mk8fae11cce9c9a45d3e151953d1ee9ab2cc82d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060557    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:27:34.060759    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:27:34.060929    4789 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:27:34.060943    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0819 10:27:34.243675    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0819 10:27:34.243690    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mkeb1eac7ee8b3901067565b7ff883710f2d1088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244061    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key ...
	I0819 10:27:34.244069    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key: {Name:mkc1afcd7a6a9a572716155e33c32e7def81650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244312    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:27:34.244340    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:27:34.244378    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:27:34.244398    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:27:34.244416    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:27:34.244448    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:27:34.244486    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:27:34.244521    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:27:34.244615    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:27:34.244666    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:27:34.244675    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:27:34.244748    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:27:34.244776    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:27:34.244831    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:27:34.244909    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:34.244942    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.244990    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.245007    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.245522    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:27:34.267677    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:27:34.287348    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:27:34.309971    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:27:34.330910    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 10:27:34.350036    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:27:34.370663    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:27:34.390457    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:27:34.410226    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:27:34.431025    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:27:34.451232    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:27:34.471133    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:27:34.487758    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:27:34.493769    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:27:34.506308    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511941    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511996    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.519851    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:27:34.531120    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:27:34.540803    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544302    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544341    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.548724    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:27:34.558817    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:27:34.568088    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571692    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571731    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.575999    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
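
The test-and-link commands above implement OpenSSL's c_rehash convention: certificates in /etc/ssl/certs are looked up by <subject_hash>.0, so b5213941 is simply the subject hash of minikubeCA.pem. The same link, with the hash computed explicitly:

	# c_rehash by hand: derive the subject hash, then create the lookup symlink.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
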
	I0819 10:27:34.585057    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:27:34.588207    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:27:34.588251    4789 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:34.588345    4789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:27:34.601241    4789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:27:34.609838    4789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:27:34.618794    4789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:27:34.627200    4789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:27:34.627208    4789 kubeadm.go:157] found existing configuration files:
	
	I0819 10:27:34.627243    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:27:34.635162    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:27:34.635198    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:27:34.643336    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:27:34.651247    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:27:34.651280    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:27:34.659346    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.667240    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:27:34.667281    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.675386    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:27:34.684053    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:27:34.684105    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 10:27:34.692357    4789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:27:34.751991    4789 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:27:34.752160    4789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:27:34.833970    4789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:27:34.834062    4789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:27:34.834153    4789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:27:34.842513    4789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:27:34.863067    4789 out.go:235]   - Generating certificates and keys ...
	I0819 10:27:34.863126    4789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:27:34.863179    4789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:27:35.003012    4789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:27:35.766829    4789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:27:35.976153    4789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:27:36.134850    4789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:27:36.228947    4789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:27:36.229166    4789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.375842    4789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:27:36.375934    4789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.597289    4789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:27:36.907219    4789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:27:37.426404    4789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:27:37.426585    4789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:27:37.566387    4789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:27:38.000620    4789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:27:38.121335    4789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:27:38.179042    4789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:27:38.231270    4789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:27:38.231752    4789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:27:38.233818    4789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:27:38.255454    4789 out.go:235]   - Booting up control plane ...
	I0819 10:27:38.255535    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:27:38.255605    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:27:38.255655    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:27:38.255734    4789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:27:38.255809    4789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:27:38.255842    4789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:27:38.364951    4789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:27:38.365069    4789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:27:39.366309    4789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001984632s
	I0819 10:27:39.366388    4789 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:27:45.029099    4789 kubeadm.go:310] [api-check] The API server is healthy after 5.666724975s
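Both waits above follow the same poll-until-healthy pattern: [kubelet-check] probes http://127.0.0.1:10248/healthz and [api-check] probes the API server, each capped at 4m0s. A sketch of that loop, assuming plain HTTP (the real api-check speaks TLS, which this omits):

// healthwait.go: poll a healthz endpoint until it returns 200 OK or the deadline expires.
package main

import (
    "fmt"
    "net/http"
    "time"
)

func waitHealthy(url string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := http.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        time.Sleep(500 * time.Millisecond) // retry interval is illustrative
    }
    return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
    if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
        fmt.Println(err)
    }
}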
	I0819 10:27:45.039440    4789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:27:45.046481    4789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:27:45.059797    4789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:27:45.059959    4789 kubeadm.go:310] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:27:45.067482    4789 kubeadm.go:310] [bootstrap-token] Using token: rrr6yu.ivgebthw63l7ehzv
	I0819 10:27:45.106820    4789 out.go:235]   - Configuring RBAC rules ...
	I0819 10:27:45.107004    4789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:27:45.110638    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:27:45.151902    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:27:45.154406    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:27:45.156223    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:27:45.158190    4789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:27:45.434935    4789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:27:45.846068    4789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:27:46.434136    4789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:27:46.434675    4789 kubeadm.go:310] 
	I0819 10:27:46.434724    4789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:27:46.434728    4789 kubeadm.go:310] 
	I0819 10:27:46.434798    4789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:27:46.434808    4789 kubeadm.go:310] 
	I0819 10:27:46.434829    4789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:27:46.434881    4789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:27:46.434925    4789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:27:46.434930    4789 kubeadm.go:310] 
	I0819 10:27:46.434974    4789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:27:46.434984    4789 kubeadm.go:310] 
	I0819 10:27:46.435035    4789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:27:46.435041    4789 kubeadm.go:310] 
	I0819 10:27:46.435080    4789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:27:46.435139    4789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:27:46.435197    4789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:27:46.435204    4789 kubeadm.go:310] 
	I0819 10:27:46.435268    4789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:27:46.435333    4789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:27:46.435337    4789 kubeadm.go:310] 
	I0819 10:27:46.435410    4789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435498    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd \
	I0819 10:27:46.435515    4789 kubeadm.go:310] 	--control-plane 
	I0819 10:27:46.435520    4789 kubeadm.go:310] 
	I0819 10:27:46.435589    4789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:27:46.435594    4789 kubeadm.go:310] 
	I0819 10:27:46.435664    4789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435746    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd 
	I0819 10:27:46.435997    4789 kubeadm.go:310] W0819 17:27:34.545490    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436229    4789 kubeadm.go:310] W0819 17:27:34.546600    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436316    4789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
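The --discovery-token-ca-cert-hash printed in both join commands above is reproducible offline: it is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, formatted as sha256:<hex>. A sketch (the CA path is an assumption):

// cahash.go: recompute a kubeadm discovery-token-ca-cert-hash from the CA certificate.
package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
)

func main() {
    data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // hypothetical path
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        log.Fatal("no PEM block in CA file")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    fmt.Printf("sha256:%x\n", sum)
}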
	I0819 10:27:46.436331    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:46.436337    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:46.458203    4789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:27:46.517773    4789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:27:46.523858    4789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:27:46.523872    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:27:46.539513    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 10:27:46.759807    4789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:27:46.759878    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:46.759883    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0819 10:27:46.777623    4789 ops.go:34] apiserver oom_adj: -16
	I0819 10:27:46.926523    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.427175    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.927281    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.428033    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.926686    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.426608    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.926666    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:50.010199    4789 kubeadm.go:1113] duration metric: took 3.25030545s to wait for elevateKubeSystemPrivileges
	I0819 10:27:50.010216    4789 kubeadm.go:394] duration metric: took 15.42163041s to StartCluster
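The burst of `kubectl get sa default` runs above is minikube polling at roughly 500ms intervals until the default ServiceAccount exists, before it creates the minikube-rbac clusterrolebinding that references kube-system:default. A sketch of that wait loop, reusing the binary and kubeconfig paths from the log (the 2-minute cap is illustrative):

// sawait.go: retry "kubectl get sa default" until it succeeds or a deadline passes.
package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    const kubectl = "/var/lib/minikube/binaries/v1.31.0/kubectl"
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        if err := cmd.Run(); err == nil {
            fmt.Println("default ServiceAccount is ready")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for the default ServiceAccount")
}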
	I0819 10:27:50.010227    4789 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.010325    4789 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.010762    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.011021    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:27:50.011033    4789 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.011050    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:27:50.011076    4789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:27:50.011116    4789 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0819 10:27:50.011120    4789 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0819 10:27:50.011148    4789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0819 10:27:50.011152    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.011155    4789 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0819 10:27:50.011186    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.011415    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011420    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011430    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.011431    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.020667    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
	I0819 10:27:50.021171    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021230    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
	I0819 10:27:50.021523    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021533    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.021634    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021753    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.021940    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.022115    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.022146    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.022229    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.022806    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.022988    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.023051    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.024924    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.025156    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:27:50.025529    4789 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:27:50.025699    4789 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0819 10:27:50.025720    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.025937    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.025963    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.031229    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51138
	I0819 10:27:50.031604    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.031942    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.031953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.032154    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.032270    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.032358    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.032435    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.033436    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.034958    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51140
	I0819 10:27:50.035269    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.035586    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.035596    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.035796    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.036148    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.036165    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.044937    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51142
	I0819 10:27:50.045312    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.045667    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.045680    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.045893    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.045996    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.046077    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.046151    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.047102    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.047225    4789 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.047234    4789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:27:50.047243    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.047325    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.047417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.047495    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.047571    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.056055    4789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:27:50.076134    4789 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.076146    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:27:50.076163    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.076310    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.076417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.076556    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.076664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.113554    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 10:27:50.127003    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.262022    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.488277    4789 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
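The sed pipeline above splices a CoreDNS hosts block, mapping host.minikube.internal to the host gateway 192.169.0.1, into the Corefile just before its `forward . /etc/resolv.conf` directive, then replaces the coredns ConfigMap. The same edit at the string level, with an abbreviated sample Corefile:

// corednshosts.go: insert a CoreDNS hosts{} block ahead of the forward directive.
package main

import (
    "fmt"
    "strings"
)

func injectHosts(corefile, ip string) string {
    hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
    var b strings.Builder
    for _, line := range strings.SplitAfter(corefile, "\n") {
        if strings.Contains(line, "forward . /etc/resolv.conf") {
            b.WriteString(hosts) // hosts block must come before forward
        }
        b.WriteString(line)
    }
    return b.String()
}

func main() {
    corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
    fmt.Print(injectHosts(corefile, "192.169.0.1"))
}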
	I0819 10:27:50.488318    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488327    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488534    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488547    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488556    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488563    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488564    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488681    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488704    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488718    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488767    4789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:27:50.488780    4789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:27:50.488862    4789 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 10:27:50.488867    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.488877    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.488882    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495057    4789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:27:50.495477    4789 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 10:27:50.495484    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.495490    4789 round_trippers.go:473]     Content-Type: application/json
	I0819 10:27:50.495494    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.498504    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:27:50.498632    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.498641    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.498797    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.498806    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.498814    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649595    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649607    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.649833    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.649843    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649848    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.649874    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649893    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.650019    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.650028    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.650044    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.673040    4789 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 10:27:50.709732    4789 addons.go:510] duration metric: took 698.654107ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 10:27:50.709774    4789 start.go:246] waiting for cluster config update ...
	I0819 10:27:50.709799    4789 start.go:255] writing updated cluster config ...
	I0819 10:27:50.746763    4789 out.go:201] 
	I0819 10:27:50.768467    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.768565    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.790908    4789 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:27:50.832651    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:50.832673    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:50.832790    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:50.832801    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:50.832852    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.833261    4789 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:50.833314    4789 start.go:364] duration metric: took 41.162µs to acquireMachinesLock for "ha-431000-m02"
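acquireMachinesLock above retries with Delay:500ms under a 13m0s timeout so only one goroutine provisions a machine at a time. A sketch of that retry-with-timeout shape; the O_EXCL lock file here is a stand-in assumption, not minikube's actual mutex mechanism:

// machinelock.go: acquire an exclusive lock file, retrying every delay until timeout.
package main

import (
    "fmt"
    "os"
    "time"
)

func acquire(path string, delay, timeout time.Duration) (func(), error) {
    deadline := time.Now().Add(timeout)
    for {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
        if err == nil {
            f.Close()
            return func() { os.Remove(path) }, nil // release callback
        }
        if time.Now().After(deadline) {
            return nil, fmt.Errorf("lock %s: timed out after %s", path, timeout)
        }
        time.Sleep(delay)
    }
}

func main() {
    release, err := acquire("/tmp/ha-431000-m02.lock", 500*time.Millisecond, 13*time.Minute)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer release()
    fmt.Println("lock held; safe to create the machine")
}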
	I0819 10:27:50.833329    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.833382    4789 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0819 10:27:50.854688    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:50.854833    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.854870    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.864309    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
	I0819 10:27:50.864640    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.864951    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.864963    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.865175    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.865294    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:27:50.865374    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:27:50.865472    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:50.865485    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:50.865515    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:50.865553    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865565    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865607    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:50.865634    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865649    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865666    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:50.865676    4789 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0819 10:27:50.865754    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.865776    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:27:50.891966    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:50.891987    4789 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0819 10:27:50.892145    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.892330    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:50.892137    4845 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:50.892421    4789 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:51.078705    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.078630    4845 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa...
	I0819 10:27:51.171843    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.171751    4845 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0819 10:27:51.171860    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0819 10:27:51.171868    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0819 10:27:51.172685    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.172591    4845 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02 ...
	I0819 10:27:51.544884    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.544910    4789 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:27:51.544922    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:27:51.571631    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:27:51.571653    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:51.571680    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571706    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571739    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:51.571767    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:51.571780    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:51.574668    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Pid is 4850
	I0819 10:27:51.575734    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:27:51.575757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.575783    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:51.576702    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:51.576759    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:51.576778    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:51.576816    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:51.576830    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:51.576844    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:51.582262    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:51.590515    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:51.591362    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:51.591388    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:51.591397    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:51.591407    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:51.978930    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:51.978947    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:52.094059    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:52.094091    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:52.094127    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:52.094142    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:52.094869    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:52.094879    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:53.577521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0819 10:27:53.577541    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:53.577636    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:53.578446    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:53.578461    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:53.578472    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:53.578481    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:53.578489    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:53.578507    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:55.579485    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0819 10:27:55.579501    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:55.579576    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:55.580358    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:55.580387    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:55.580414    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:55.580426    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:55.580434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:55.580442    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.581588    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0819 10:27:57.581603    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:57.581681    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:57.582486    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:57.582510    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:57.582521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:57.582530    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:57.582540    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:57.582548    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.680321    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:27:57.680434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:27:57.680445    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:27:57.704982    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:27:59.583757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0819 10:27:59.583772    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:59.583842    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:59.584652    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:59.584696    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:59.584710    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:59.584720    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:59.584729    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:59.584737    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:28:01.585137    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0819 10:28:01.585154    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.585235    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.585996    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:28:01.586042    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:28:01.586055    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:28:01.586080    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:28:01.586086    4789 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
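The numbered attempts above rescan /var/db/dhcpd_leases every two seconds for the MAC hyperkit generated (5a:74:68:47:b9:72) until the guest picks up a lease. A sketch of one scan pass, assuming the brace-delimited name=value record layout implied by the dhcp entries echoed in the log:

// leasefind.go: look up the IP leased to a given MAC in macOS's dhcpd_leases file.
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "strings"
)

func main() {
    const mac = "5a:74:68:47:b9:72" // MAC from the log
    f, err := os.Open("/var/db/dhcpd_leases")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    var ip string
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        line := strings.TrimSpace(sc.Text())
        switch {
        case strings.HasPrefix(line, "ip_address="):
            ip = strings.TrimPrefix(line, "ip_address=") // remember the record's IP
        case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
            fmt.Printf("found match: %s -> %s\n", mac, ip)
            return
        }
    }
    fmt.Println("no lease yet; retry after the VM boots")
}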
	I0819 10:28:01.586098    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:01.586694    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586794    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586889    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:28:01.586896    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:28:01.586980    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.587029    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.587790    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:28:01.587796    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:28:01.587800    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:28:01.587804    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:01.587881    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:01.587956    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588060    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588138    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:01.588256    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:01.588435    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:01.588443    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:28:02.645180    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
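WaitForSSH above simply keeps running `exit 0` over SSH until a session succeeds. A sketch using golang.org/x/crypto/ssh with the key path and address from the log; skipping host-key verification mirrors a throwaway test VM, not production practice:

// sshwait.go: dial SSH and run "exit 0" in a loop until the node answers.
package main

import (
    "fmt"
    "log"
    "os"
    "time"

    "golang.org/x/crypto/ssh"
)

func main() {
    key, err := os.ReadFile("/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa")
    if err != nil {
        log.Fatal(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        log.Fatal(err)
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
        Timeout:         5 * time.Second,
    }
    for {
        client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
        if err == nil {
            sess, serr := client.NewSession()
            if serr == nil {
                rerr := sess.Run("exit 0")
                sess.Close()
                if rerr == nil {
                    client.Close()
                    fmt.Println("SSH is available")
                    return
                }
            }
            client.Close()
        }
        time.Sleep(time.Second)
    }
}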
	I0819 10:28:02.645193    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:28:02.645198    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.645326    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.645422    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645501    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645583    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.645718    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.645869    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.645877    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:28:02.700961    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:28:02.700992    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:28:02.700998    4789 main.go:141] libmachine: Provisioning with buildroot...
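Provisioner detection above amounts to running `cat /etc/os-release` and matching the ID field (this VM reports ID=buildroot). A sketch of the parse, with simplified quoting (only surrounding double quotes are stripped):

// osrelease.go: parse os-release output into key/value pairs and report the ID.
package main

import (
    "fmt"
    "strings"
)

func parseOSRelease(out string) map[string]string {
    kv := map[string]string{}
    for _, line := range strings.Split(out, "\n") {
        k, v, ok := strings.Cut(line, "=")
        if !ok {
            continue
        }
        kv[k] = strings.Trim(v, `"`)
    }
    return kv
}

func main() {
    out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    info := parseOSRelease(out)
    fmt.Println("detected provisioner:", info["ID"])
}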
	I0819 10:28:02.701003    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701132    4789 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:28:02.701143    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701237    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.701327    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.701424    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701502    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701588    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.701720    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.701855    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.701864    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:28:02.773500    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:28:02.773515    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.773649    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.773737    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773840    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773945    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.774071    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.774226    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.774237    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:28:02.838956    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:28:02.838971    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:28:02.838984    4789 buildroot.go:174] setting up certificates
	I0819 10:28:02.838992    4789 provision.go:84] configureAuth start
	I0819 10:28:02.838998    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.839135    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:02.839223    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.839322    4789 provision.go:143] copyHostCerts
	I0819 10:28:02.839347    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839393    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:28:02.839399    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839532    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:28:02.839738    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839769    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:28:02.839774    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839845    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:28:02.839992    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840021    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:28:02.840025    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840090    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:28:02.840244    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
	I0819 10:28:02.878856    4789 provision.go:177] copyRemoteCerts
	I0819 10:28:02.878899    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:28:02.878912    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.879041    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.879132    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.879231    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.879330    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:02.914748    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:28:02.914819    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:28:02.934608    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:28:02.934673    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:28:02.954833    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:28:02.954900    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
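configureAuth generates a Docker server certificate whose SANs cover every name the node may be reached by (loopback, the node IP, the hostname, localhost, minikube) and then copies it to /etc/docker over SSH. A self-contained sketch of SAN-bearing certificate generation with Go's crypto/x509; the self-signed CA here is a stand-in for minikube's ca.pem/ca-key.pem, and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA stand-in (minikube loads ca.pem/ca-key.pem instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the log: loopback, node IP, names.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:     []string{"ha-431000-m02", "localhost", "minikube"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}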
	I0819 10:28:02.974652    4789 provision.go:87] duration metric: took 135.649275ms to configureAuth
	I0819 10:28:02.974666    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:28:02.974809    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:02.974823    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:02.974958    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.975063    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.975147    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975219    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975328    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.975454    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.975601    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.975609    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:28:03.033628    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:28:03.033639    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:28:03.033715    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:28:03.033730    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.033861    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.033950    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034053    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034140    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.034264    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.034412    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.034459    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:28:03.102644    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:28:03.102663    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.102811    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.102898    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.102999    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.103120    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.103244    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.103390    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.103404    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:28:04.637367    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
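The unit install step is change-driven: the rendered unit is written to docker.service.new, diffed against the installed copy, and only on a difference (or, as here on first boot, a missing old file) moved into place with a daemon-reload, enable, and restart. A rough Go equivalent of that install-if-changed pattern (installIfChanged is a hypothetical name):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces the unit and restarts Docker only when the
// rendered contents differ from what is already installed.
func installIfChanged(path string, rendered []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, rendered) {
		return nil // unit unchanged: skip reload/restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	_ = installIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
}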
	I0819 10:28:04.637381    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:28:04.637388    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0819 10:28:04.637524    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:28:04.637530    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:28:04.637534    4789 client.go:171] duration metric: took 13.771742286s to LocalClient.Create
	I0819 10:28:04.637544    4789 start.go:167] duration metric: took 13.771771513s to libmachine.API.Create "ha-431000"
	I0819 10:28:04.637550    4789 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:28:04.637557    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:28:04.637566    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.637712    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:28:04.637723    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.637834    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.637926    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.638026    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.638127    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.678475    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:28:04.682965    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:28:04.682980    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:28:04.683079    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:28:04.683246    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:28:04.683253    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:28:04.683434    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:28:04.695086    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:04.723279    4789 start.go:296] duration metric: took 85.720185ms for postStartSetup
	I0819 10:28:04.723311    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:04.723943    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.724123    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:28:04.724446    4789 start.go:128] duration metric: took 13.890752069s to createHost
	I0819 10:28:04.724460    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.724558    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.724679    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724786    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724871    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.724979    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:04.725097    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:04.725103    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:28:04.784682    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088484.852271103
	
	I0819 10:28:04.784694    4789 fix.go:216] guest clock: 1724088484.852271103
	I0819 10:28:04.784698    4789 fix.go:229] Guest: 2024-08-19 10:28:04.852271103 -0700 PDT Remote: 2024-08-19 10:28:04.724454 -0700 PDT m=+55.319126445 (delta=127.817103ms)
	I0819 10:28:04.784725    4789 fix.go:200] guest clock delta is within tolerance: 127.817103ms
	I0819 10:28:04.784731    4789 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.951104834s
	I0819 10:28:04.784750    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.784884    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.807240    4789 out.go:177] * Found network options:
	I0819 10:28:04.829600    4789 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:28:04.851548    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.851607    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852495    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852747    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852876    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:28:04.852915    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:28:04.852962    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.853080    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:28:04.853100    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.853127    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853372    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853394    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853596    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853633    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853742    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853804    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.853880    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:28:04.886788    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:28:04.886847    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:28:04.931189    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:28:04.931209    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:04.931315    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:04.947443    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:28:04.955693    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:28:04.964155    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:28:04.964197    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:28:04.972493    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.980548    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:28:04.988709    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.996856    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:28:05.005271    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:28:05.013575    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:28:05.021801    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:28:05.030285    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:28:05.037842    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:28:05.045332    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.140730    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
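The series of sed edits above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver, keeping it consistent with the Docker daemon configuration applied shortly after. The central SystemdCgroup rewrite expressed in Go, as a sketch of the same regex substitution:

package main

import (
	"fmt"
	"regexp"
)

// A minimal sketch of the sed edit above: force SystemdCgroup = false so
// containerd uses the cgroupfs driver, matching the rest of the node.
func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}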
	I0819 10:28:05.159555    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:05.159625    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:28:05.177222    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.189624    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:28:05.203743    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.214606    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.224836    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:28:05.249649    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.261132    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:05.276191    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:28:05.279129    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:28:05.287175    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:28:05.300748    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:28:05.396444    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:28:05.505778    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:28:05.505805    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:28:05.520914    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.616215    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:28:07.911303    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.295016426s)
	I0819 10:28:07.911366    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:28:07.923467    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:28:07.938312    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:07.949283    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:28:08.046922    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:28:08.152880    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.256594    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:28:08.271339    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:08.283089    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.384798    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:28:08.441813    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:28:08.441881    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:28:08.446421    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:28:08.446473    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:28:08.449807    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:28:08.479621    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:28:08.479690    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.496571    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.537488    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:28:08.579078    4789 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:28:08.603340    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:08.603786    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:28:08.608372    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
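The /etc/hosts update is the usual strip-then-append one-liner: any existing host.minikube.internal line is filtered out before the fresh mapping is appended, so repeated starts never accumulate duplicate entries. A Go rendering of that logic (upsertHost is a hypothetical name):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line for the name, then appends the
// fresh IP<TAB>name mapping, mirroring the shell one-liner in the log.
func upsertHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.169.0.1", "host.minikube.internal"))
}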
	I0819 10:28:08.618166    4789 mustload.go:65] Loading cluster: ha-431000
	I0819 10:28:08.618314    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:08.618533    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.618549    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.627122    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
	I0819 10:28:08.627459    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.627845    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.627857    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.628097    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.628239    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:28:08.628342    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:08.628430    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:28:08.629353    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:08.629592    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.629608    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.638041    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51172
	I0819 10:28:08.638388    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.638753    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.638770    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.638992    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.639108    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:08.639209    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:28:08.639216    4789 certs.go:194] generating shared ca certs ...
	I0819 10:28:08.639225    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.639357    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:28:08.639425    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:28:08.639434    4789 certs.go:256] generating profile certs ...
	I0819 10:28:08.639538    4789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:28:08.639562    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788
	I0819 10:28:08.639575    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0819 10:28:08.693749    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 ...
	I0819 10:28:08.693766    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788: {Name:mkade16cb35e521e9e55fc42d7cb129c8b94b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694149    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 ...
	I0819 10:28:08.694160    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788: {Name:mkeae0a28d48da45f84299952289f15db5f944f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694378    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:28:08.694703    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:28:08.694954    4789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:28:08.694964    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:28:08.694987    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:28:08.695006    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:28:08.695024    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:28:08.695042    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:28:08.695060    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:28:08.695078    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:28:08.695096    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:28:08.695175    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:28:08.695213    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:28:08.695228    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:28:08.695261    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:28:08.695290    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:28:08.695321    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:28:08.695400    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:08.695438    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:28:08.695462    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:28:08.695482    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:08.695511    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:08.695664    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:08.695745    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:08.695845    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:08.695925    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:08.729193    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:28:08.736181    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:28:08.748665    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:28:08.751826    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:28:08.773481    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:28:08.777252    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:28:08.787546    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:28:08.791015    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:28:08.800105    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:28:08.803218    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:28:08.812240    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:28:08.815351    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:28:08.824083    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:28:08.844052    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:28:08.864107    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:28:08.884612    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:28:08.904284    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 10:28:08.924397    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:28:08.944026    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:28:08.964689    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:28:08.984934    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:28:09.004413    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:28:09.024043    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:28:09.043924    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:28:09.058066    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:28:09.071585    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:28:09.085080    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:28:09.098536    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:28:09.112048    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:28:09.125242    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:28:09.139717    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:28:09.144032    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:28:09.152602    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.155967    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.156009    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.160192    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:28:09.168568    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:28:09.176997    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180533    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180568    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.184799    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:28:09.193356    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:28:09.201811    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205453    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205494    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.209760    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
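Each CA lands in /usr/share/ca-certificates and is then symlinked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL locates trust anchors by directory lookup. A sketch that shells out to openssl to build the same link command (subjectHashLink is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink returns the `ln -fs` command minikube runs: OpenSSL
// resolves trust anchors in /etc/ssl/certs via <subject-hash>.0 symlinks.
func subjectHashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return fmt.Sprintf("ln -fs %s /etc/ssl/certs/%s.0", pemPath, hash), nil
}

func main() {
	if cmd, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err == nil {
		fmt.Println(cmd)
	}
}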
	I0819 10:28:09.218392    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:28:09.222392    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:28:09.222437    4789 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:28:09.222498    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:28:09.222516    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:28:09.222559    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:28:09.234408    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:28:09.234452    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 10:28:09.234506    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.242939    4789 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:28:09.242994    4789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0819 10:28:09.251336    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm
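The kubelet/kubectl/kubeadm downloads carry a `checksum=file:...sha256` query, i.e. each binary is verified against the published SHA-256 file before being cached under .minikube/cache. A minimal stand-alone sketch of download-and-verify (fetchVerified is hypothetical; the real logic lives in minikube's download package):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified streams the binary to disk while hashing it, then
// compares the digest against the published .sha256 file.
func fetchVerified(url, sumURL, dest string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
	fmt.Println(fetchVerified(base, base+".sha256", "/tmp/kubelet"))
}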
	I0819 10:28:11.797289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:28:11.809069    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.809192    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.812267    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:28:11.812291    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 10:28:12.469259    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.469340    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.472845    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:28:12.472869    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:28:13.348737    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.348820    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.352429    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:28:13.352449    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:28:13.542994    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:28:13.550937    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:28:13.564187    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:28:13.577654    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:28:13.591433    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:28:13.594347    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:13.604347    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:13.710422    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:13.730131    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:13.730407    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:13.730448    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:13.739474    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51199
	I0819 10:28:13.739816    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:13.740174    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:13.740190    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:13.740438    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:13.740564    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:13.740661    4789 start.go:317] joinCluster: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:28:13.740750    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 10:28:13.740767    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:13.740857    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:13.740939    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:13.741027    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:13.741101    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:13.815525    4789 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:13.815563    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0819 10:28:41.108330    4789 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (27.292143754s)
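The join uses a fresh token printed by `kubeadm token create --print-join-command` on the primary, plus --control-plane and the node's own advertise address, so m02 comes up as a second control plane behind the 192.169.0.254 VIP. Assembling that invocation, as a sketch with placeholder credentials (joinCmd is a hypothetical helper):

package main

import "fmt"

// joinCmd assembles the control-plane join invocation seen in the log.
// Token and CA hash come from the primary control plane; the values
// passed below are placeholders, not real credentials.
func joinCmd(token, caHash, nodeName, advertiseIP string) string {
	return fmt.Sprintf("kubeadm join control-plane.minikube.internal:8443 "+
		"--token %s --discovery-token-ca-cert-hash sha256:%s "+
		"--ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock "+
		"--node-name=%s --control-plane --apiserver-advertise-address=%s "+
		"--apiserver-bind-port=8443", token, caHash, nodeName, advertiseIP)
}

func main() {
	fmt.Println(joinCmd("<token>", "<hash>", "ha-431000-m02", "192.169.0.6"))
}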
	I0819 10:28:41.108351    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 10:28:41.504714    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000-m02 minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=false
	I0819 10:28:41.585348    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-431000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 10:28:41.693283    4789 start.go:319] duration metric: took 27.951997328s to joinCluster
	I0819 10:28:41.693326    4789 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:41.693537    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:41.715528    4789 out.go:177] * Verifying Kubernetes components...
	I0819 10:28:41.790354    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:41.995139    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:42.017369    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:28:42.017608    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:28:42.017650    4789 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:28:42.017827    4789 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:42.017919    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.017925    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.017930    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.017935    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.025432    4789 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 10:28:42.518902    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.518917    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.518923    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.518927    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.521742    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:43.018396    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.018411    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.018417    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.018421    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.021454    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:43.518031    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.518083    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.518106    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.518116    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.522999    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:44.018193    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.018219    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.018231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.018237    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.021854    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:44.022387    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:44.518152    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.518189    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.518196    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.518199    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.520027    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.019772    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.019792    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.019799    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.019803    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.021628    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.518039    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.518053    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.518059    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.518064    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.520113    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:46.018198    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.018232    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.018239    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.018243    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.020136    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.518474    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.518490    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.518496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.518499    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.520505    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.520916    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:47.019124    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.019150    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.019162    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.022729    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:47.518316    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.518341    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.518351    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.518356    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.520471    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:48.019594    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.019620    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.019630    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.019636    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.023447    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:48.518492    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.518526    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.518583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.518593    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.523421    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:48.523787    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:49.019217    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.019242    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.019254    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.019260    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.022862    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:49.520299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.520324    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.520337    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.520342    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.523532    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.019383    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.019412    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.019424    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.019430    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.022847    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.519489    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.519503    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.519511    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.519515    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.522131    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:51.019130    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.019153    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.019163    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.022497    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:51.022894    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:51.518391    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.518448    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.518465    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.518476    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.521848    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.019014    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.019045    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.019103    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.019117    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.022339    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.519630    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.519644    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.519651    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.519655    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.522019    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.018435    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.018460    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.018472    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.018480    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.021850    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:53.518299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.518340    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.518349    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.518355    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.520795    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.521268    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:54.020380    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.020406    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.020418    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.020423    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.024178    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:54.519346    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.519364    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.519383    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.519387    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.521155    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:55.020400    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.020425    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.020437    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.020444    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.024326    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:55.519229    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.519245    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.519264    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.519268    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.521435    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:55.521852    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:56.019678    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.019703    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.019714    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.019719    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.023317    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:56.518539    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.518563    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.518576    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.518581    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.521781    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.020424    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.020449    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.020460    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.020465    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.024114    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.519399    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.519428    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.519468    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.519475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.522788    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.523223    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:58.018734    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.018759    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.018770    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.018777    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.022242    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.518348    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.518359    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.518371    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.518375    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.522907    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.523168    4789 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:28:58.523182    4789 node_ready.go:38] duration metric: took 16.504973252s for node "ha-431000-m02" to be "Ready" ...
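The burst of GET /api/v1/nodes/ha-431000-m02 requests above is a plain condition poll: fetch the node roughly every 500ms and test its Ready condition until it flips to True or the 6m0s budget runs out. A minimal sketch of that loop with client-go, assuming a placeholder kubeconfig path (not minikube's actual node_ready.go code):

```go
// Sketch of the node-readiness poll: GET the node every 500ms, check the
// Ready condition, give up after the timeout. Kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-431000-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println(`node "ha-431000-m02" has status "Ready":"True"`)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for Ready")
}
```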
	I0819 10:28:58.523189    4789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0819 10:28:58.523237    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:28:58.523243    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.523249    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.523253    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.528083    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.532699    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.532761    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:28:58.532768    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.532774    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.532776    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.535978    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.536344    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.536351    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.536358    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.536361    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.538061    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.538368    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.538377    4789 pod_ready.go:82] duration metric: took 5.660556ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538383    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538413    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:28:58.538417    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.538423    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.538428    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.540013    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.540457    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.540465    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.540471    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.540475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.542120    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.542393    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.542400    4789 pod_ready.go:82] duration metric: took 4.011453ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542406    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542439    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:28:58.542444    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.542449    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.542454    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.543986    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.544340    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.544347    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.544353    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.544356    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.545868    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.546173    4789 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.546181    4789 pod_ready.go:82] duration metric: took 3.769725ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546187    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546221    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:28:58.546226    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.546231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.546234    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.547638    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.548110    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.548118    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.548123    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.548127    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.549514    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.549853    4789 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.549860    4789 pod_ready.go:82] duration metric: took 3.668598ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.549868    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.718822    4789 request.go:632] Waited for 168.888912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718861    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718867    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.718872    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.718876    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.721032    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:58.919673    4789 request.go:632] Waited for 198.011193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919731    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919740    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.919750    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.919807    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.923236    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.923670    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.923682    4789 pod_ready.go:82] duration metric: took 373.799986ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
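The "Waited for ... due to client-side throttling" lines are produced by client-go's token-bucket rate limiter. The kapi.go dump earlier shows QPS:0, Burst:0 in the rest.Config, so the client falls back to client-go's defaults (5 requests/s, burst of 10), which is why the back-to-back pod and node GETs get spaced out by roughly 200ms. A sketch of raising those limits on a rest.Config, with a placeholder kubeconfig path:

```go
// Sketch: configuring client-go's client-side rate limits. Zero values in
// rest.Config mean "use the defaults" (QPS 5, Burst 10), which produces the
// throttling waits seen in the log. Kubeconfig path is a placeholder.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained requests per second before throttling kicks in
	cfg.Burst = 100 // short bursts allowed above the sustained rate

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```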
	I0819 10:28:58.923691    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.119399    4789 request.go:632] Waited for 195.629207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119559    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119572    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.119583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.119589    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.122804    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.318619    4789 request.go:632] Waited for 195.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318674    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318695    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.318702    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.318705    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.320812    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.321165    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.321173    4789 pod_ready.go:82] duration metric: took 397.466691ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.321180    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.520541    4789 request.go:632] Waited for 199.292765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520642    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520652    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.520663    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.520672    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.524463    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.718728    4789 request.go:632] Waited for 192.615056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718803    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718811    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.718818    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.718823    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.720955    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.721397    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.721407    4789 pod_ready.go:82] duration metric: took 400.213219ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.721415    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.918907    4789 request.go:632] Waited for 197.434904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919004    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919014    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.919024    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.919030    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.922451    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.119192    4789 request.go:632] Waited for 196.220574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119263    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119272    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.119286    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.119297    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.122630    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.122957    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.122968    4789 pod_ready.go:82] duration metric: took 401.538458ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.122977    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.320524    4789 request.go:632] Waited for 197.475989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320660    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320672    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.320681    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.320689    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.323985    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.519403    4789 request.go:632] Waited for 194.628597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519535    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519546    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.519560    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.519568    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.523121    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.523435    4789 pod_ready.go:93] pod "kube-proxy-5h7j2" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.523449    4789 pod_ready.go:82] duration metric: took 400.456993ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.523457    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.718666    4789 request.go:632] Waited for 195.15054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718742    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718752    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.718786    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.718800    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.721920    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.918782    4789 request.go:632] Waited for 196.40919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918873    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918882    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.918896    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.918906    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.922355    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.922815    4789 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.922824    4789 pod_ready.go:82] duration metric: took 399.351509ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.922830    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.118854    4789 request.go:632] Waited for 195.977175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118950    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118965    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.118981    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.118987    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.122683    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.318886    4789 request.go:632] Waited for 195.887859ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319029    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319042    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.319053    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.319063    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.322689    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.323187    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.323200    4789 pod_ready.go:82] duration metric: took 400.355182ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.323208    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.518928    4789 request.go:632] Waited for 195.662505ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519043    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519057    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.519070    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.519077    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.522736    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.718819    4789 request.go:632] Waited for 195.65197ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718885    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718891    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.718899    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.718905    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.721246    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:29:01.721682    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.721691    4789 pod_ready.go:82] duration metric: took 398.467113ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.721701    4789 pod_ready.go:39] duration metric: took 3.198431164s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
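Each of the pod waits above is the same check applied per pod: fetch it from kube-system and test its PodReady condition (plus a follow-up node GET). A minimal client-go sketch of one such check, with placeholder kubeconfig path and pod name:

```go
// Sketch of the per-pod readiness check performed for each system-critical
// pod above. Kubeconfig path and pod name are placeholders.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"kube-scheduler-ha-431000-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready: %v\n", pod.Name, podReady(pod))
}
```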
	I0819 10:29:01.721718    4789 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:29:01.721774    4789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:29:01.735634    4789 api_server.go:72] duration metric: took 20.041851081s to wait for apiserver process to appear ...
	I0819 10:29:01.735647    4789 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:29:01.735663    4789 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:29:01.738815    4789 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:29:01.738848    4789 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:29:01.738854    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.738860    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.738864    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.739526    4789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:29:01.739580    4789 api_server.go:141] control plane version: v1.31.0
	I0819 10:29:01.739589    4789 api_server.go:131] duration metric: took 3.937962ms to wait for apiserver health ...
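The healthz probe issues an authenticated GET against the bare /healthz path and expects the literal body "ok", as logged above. A sketch using the same clientset's REST client (placeholder kubeconfig path):

```go
// Sketch of the apiserver healthz probe: GET /healthz with the
// authenticated client and print the raw body ("ok" when healthy).
// Kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	body, err := client.Discovery().RESTClient().
		Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz returned: %s\n", body)
}
```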
	I0819 10:29:01.739594    4789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:29:01.918638    4789 request.go:632] Waited for 178.995687ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918733    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918745    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.918757    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.918762    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.922864    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:01.926606    4789 system_pods.go:59] 17 kube-system pods found
	I0819 10:29:01.926628    4789 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:01.926633    4789 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:01.926636    4789 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:01.926640    4789 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:01.926642    4789 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:01.926645    4789 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:01.926647    4789 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:01.926650    4789 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:01.926653    4789 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:01.926656    4789 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:01.926659    4789 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:01.926666    4789 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:01.926669    4789 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:01.926672    4789 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:01.926674    4789 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:01.926677    4789 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:01.926680    4789 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:01.926684    4789 system_pods.go:74] duration metric: took 187.080965ms to wait for pod list to return data ...
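The 17-pod inventory above comes from a single list of the kube-system namespace. A sketch producing the same name-and-status output (placeholder kubeconfig path):

```go
// Sketch of the kube-system pod inventory: list the namespace and print
// each pod's name, UID, and phase. Kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
```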
	I0819 10:29:01.926689    4789 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:29:02.119406    4789 request.go:632] Waited for 192.625822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119507    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119517    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.119528    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.119535    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.123120    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.123283    4789 default_sa.go:45] found service account: "default"
	I0819 10:29:02.123293    4789 default_sa.go:55] duration metric: took 196.595366ms for default service account to be created ...
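The default-service-account wait lists the service accounts in the default namespace until one named "default" appears. A minimal sketch of one iteration of that check (placeholder kubeconfig path):

```go
// Sketch of the default service account check: list service accounts in
// the "default" namespace and look for one named "default". Kubeconfig
// path is a placeholder.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sas, err := client.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			fmt.Println(`found service account: "default"`)
			return
		}
	}
	fmt.Println("default service account not created yet")
}
```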
	I0819 10:29:02.123300    4789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:29:02.319795    4789 request.go:632] Waited for 196.43255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319928    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319939    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.319947    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.319954    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.324586    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:02.328058    4789 system_pods.go:86] 17 kube-system pods found
	I0819 10:29:02.328071    4789 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:02.328075    4789 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:02.328078    4789 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:02.328083    4789 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:02.328086    4789 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:02.328088    4789 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:02.328091    4789 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:02.328093    4789 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:02.328096    4789 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:02.328098    4789 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:02.328101    4789 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:02.328103    4789 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:02.328106    4789 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:02.328109    4789 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:02.328112    4789 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:02.328115    4789 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:02.328117    4789 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:02.328122    4789 system_pods.go:126] duration metric: took 204.813151ms to wait for k8s-apps to be running ...
	I0819 10:29:02.328133    4789 system_svc.go:44] waiting for kubelet service to be running ...
	I0819 10:29:02.328183    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:29:02.340002    4789 system_svc.go:56] duration metric: took 11.865981ms for WaitForService to wait for kubelet
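The kubelet check runs `sudo systemctl is-active --quiet service kubelet` inside the VM via minikube's ssh_runner and treats a zero exit status as "running". An equivalent sketch from the host using the stock ssh client; the user, host, and key path are placeholders, and this is not the ssh_runner implementation:

```go
// Sketch: checking a systemd unit over SSH, mirroring the command in the
// log. User, host, and key path are placeholders. A zero exit status from
// systemctl means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("ssh",
		"-i", "/path/to/machines/ha-431000-m02/id_rsa",
		"docker@192.169.0.6",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```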
	I0819 10:29:02.340017    4789 kubeadm.go:582] duration metric: took 20.646222268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:29:02.340034    4789 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:29:02.518831    4789 request.go:632] Waited for 178.726274ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518969    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518980    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.518991    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.518998    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.522659    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.523326    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523339    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523348    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523351    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523354    4789 node_conditions.go:105] duration metric: took 183.311856ms to run NodePressure ...
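The NodePressure verification reads each node's capacity from a single node list; the two capacity pairs above correspond to the cluster's two nodes. A sketch printing the same ephemeral-storage and cpu figures (placeholder kubeconfig path):

```go
// Sketch of the node capacity readout: list all nodes and print the
// ephemeral-storage and cpu capacity per node. Kubeconfig path is a
// placeholder.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
			n.Name, storage.String(), cpu.String())
	}
}
```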
	I0819 10:29:02.523361    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:29:02.523378    4789 start.go:255] writing updated cluster config ...
	I0819 10:29:02.544110    4789 out.go:201] 
	I0819 10:29:02.566227    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:02.566358    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.588965    4789 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:29:02.630777    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:29:02.630803    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:29:02.630953    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:29:02.630966    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:29:02.631053    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.631767    4789 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:29:02.631849    4789 start.go:364] duration metric: took 64.609µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:29:02.631869    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:29:02.631978    4789 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0819 10:29:02.652968    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:29:02.653116    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:29:02.653158    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:29:02.663539    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0819 10:29:02.663925    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:29:02.664263    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:29:02.664277    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:29:02.664539    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:29:02.664672    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:02.664758    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:02.664867    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:29:02.664899    4789 client.go:168] LocalClient.Create starting
	I0819 10:29:02.664932    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:29:02.664992    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665005    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665051    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:29:02.665087    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665103    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665116    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:29:02.665122    4789 main.go:141] libmachine: (ha-431000-m03) Calling .PreCreateCheck
	I0819 10:29:02.665218    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.665228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:02.674109    4789 main.go:141] libmachine: Creating machine...
	I0819 10:29:02.674126    4789 main.go:141] libmachine: (ha-431000-m03) Calling .Create
	I0819 10:29:02.674302    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.674550    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.674293    4918 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:29:02.674675    4789 main.go:141] libmachine: (ha-431000-m03) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:29:02.956098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.955977    4918 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa...
	I0819 10:29:03.041212    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.041121    4918 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk...
	I0819 10:29:03.041230    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing magic tar header
	I0819 10:29:03.041239    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing SSH key tar header
	I0819 10:29:03.042098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.042003    4918 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03 ...
	I0819 10:29:03.582755    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.582783    4789 main.go:141] libmachine: (ha-431000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:29:03.582846    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:29:03.618942    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:29:03.618960    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:29:03.619021    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619049    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619085    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:29:03.619116    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
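
The Arguments and CmdLine lines above show how the driver assembles the hyperkit invocation: PCI slots (-s) for the host bridge, LPC, a virtio-net NIC, the raw disk, the boot ISO and an entropy device, plus a direct kernel boot (-f kexec,...) carrying the serial-console cmdline. A sketch of the same argv layout, where stateDir is a hypothetical stand-in for the long per-machine directory and the UUID is the one generated for this run:

package main

import (
	"fmt"
	"strings"
)

// hyperkitArgs reproduces the argv layout logged above. It is an
// illustration of the layout, not the driver's own builder.
func hyperkitArgs(stateDir, uuid string, cpus, memMB int) []string {
	return []string{
		"/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid",
		"-c", fmt.Sprint(cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge", // PCI slot 0: host bridge
		"-s", "31,lpc", // LPC bus for the serial console
		"-s", "1:0,virtio-net",
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/ha-431000-m03.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," +
			"earlyprintk=serial loglevel=3 console=ttyS0 console=tty0",
	}
}

func main() {
	args := hyperkitArgs("/tmp/ha-431000-m03", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", 2, 2200)
	fmt.Println(strings.Join(args, " "))
}
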
	I0819 10:29:03.619133    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:29:03.621990    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Pid is 4921
	I0819 10:29:03.622461    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:29:03.622497    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.622585    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:03.623424    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:03.623486    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:03.623500    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:03.623537    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:03.623548    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:03.623558    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:03.623568    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:03.629643    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:29:03.638725    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:29:03.639577    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:03.639599    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:03.639609    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:03.639622    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.022361    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:29:04.022375    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:29:04.137228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:04.137262    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:04.137274    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:04.137284    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.138001    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:29:04.138016    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:29:05.623879    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 1
	I0819 10:29:05.623896    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:05.624023    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:05.624809    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:05.624873    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:05.624888    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:05.624904    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:05.624917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:05.624926    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:05.624935    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:07.626679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 2
	I0819 10:29:07.626696    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:07.626779    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:07.627539    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:07.627582    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:07.627592    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:07.627610    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:07.627619    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:07.627626    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:07.627635    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.627812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 3
	I0819 10:29:09.627828    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:09.627917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:09.628679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:09.628746    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:09.628777    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:09.628791    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:09.628799    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:09.628806    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:09.628812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.722721    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:29:09.722792    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:29:09.722802    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:29:09.745848    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:29:11.630390    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 4
	I0819 10:29:11.630407    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:11.630495    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:11.631275    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:11.631321    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:11.631331    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:11.631340    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:11.631359    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:11.631366    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:11.631387    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:13.633236    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 5
	I0819 10:29:13.633251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.633339    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.634147    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:13.634209    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0819 10:29:13.634221    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:29:13.634228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:29:13.634232    4789 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
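
Each "Attempt N" above rescans /var/db/dhcpd_leases for the MAC hyperkit generated (f6:29:ff:43:e4:63) until a lease appears, roughly ten seconds after boot in this run. A sketch of the lookup over entries in the simplified form the log prints (the raw lease-file syntax on macOS differs):

package main

import (
	"fmt"
	"strings"
)

// lease holds the fields the driver logs for each dhcp entry above.
type lease struct{ Name, IP, MAC string }

// findIP scans the parsed lease entries for a MAC and returns the
// associated IP, as each polling attempt in the log does.
func findIP(leases []lease, mac string) (string, bool) {
	for _, l := range leases {
		if strings.EqualFold(l.MAC, mac) {
			return l.IP, true
		}
	}
	return "", false
}

func main() {
	entries := []lease{
		{Name: "minikube", IP: "192.169.0.6", MAC: "5a:74:68:47:b9:72"},
		{Name: "minikube", IP: "192.169.0.7", MAC: "f6:29:ff:43:e4:63"},
	}
	if ip, ok := findIP(entries, "f6:29:ff:43:e4:63"); ok {
		fmt.Println("Found match:", ip) // IP: 192.169.0.7, as logged
	}
}
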
	I0819 10:29:13.634299    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:13.634943    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635064    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635157    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:29:13.635165    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:29:13.635251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.635310    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.636120    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:29:13.636129    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:29:13.636133    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:29:13.636138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:13.636228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:13.636309    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636392    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636477    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:13.636587    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:13.636755    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:13.636763    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:29:14.697546    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:29:14.697558    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:29:14.697564    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.697702    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.697798    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.697887    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.698009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.698168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.698318    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.698326    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:29:14.765778    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:29:14.765827    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:29:14.765833    4789 main.go:141] libmachine: Provisioning with buildroot...
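
The "found compatible host: buildroot" decision comes from the /etc/os-release contents just printed, presumably keyed on the ID field (buildroot here). A sketch of pulling that field out, with os-release quoting rules simplified:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osReleaseID extracts the ID= field from /etc/os-release contents,
// which selects the provisioner above. Quoting is simplified.
func osReleaseID(contents string) string {
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	fmt.Println(osReleaseID("NAME=Buildroot\nID=buildroot\n")) // buildroot
}
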
	I0819 10:29:14.765839    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.765977    4789 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:29:14.765988    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.766081    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.766185    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.766270    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766369    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766481    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.766635    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.766783    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.766792    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:29:14.841753    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:29:14.841769    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.841901    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.842009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842101    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842195    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.842324    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.842477    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.842489    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:29:14.911764    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
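
The shell above makes the /etc/hosts update idempotent: rewrite an existing 127.0.1.1 line if one is present, append one otherwise, and do nothing when the hostname is already mapped. The same logic as a Go sketch over the file contents (the hostname match is simplified relative to the grep above):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell: rewrite the 127.0.1.1 line if
// one exists, otherwise append one; leave the file alone when the
// hostname already appears. A sketch, not minikube's implementation.
func ensureHostsEntry(hosts, hostname string) string {
	if strings.Contains(hosts, hostname) {
		return hosts
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return hosts + "127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-431000-m03"))
}
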
	I0819 10:29:14.911779    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:29:14.911793    4789 buildroot.go:174] setting up certificates
	I0819 10:29:14.911800    4789 provision.go:84] configureAuth start
	I0819 10:29:14.911807    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.911942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:14.912037    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.912110    4789 provision.go:143] copyHostCerts
	I0819 10:29:14.912141    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912187    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:29:14.912193    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912326    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:29:14.912504    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912534    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:29:14.912539    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912651    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:29:14.912808    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912854    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:29:14.912859    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912935    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:29:14.913083    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
	I0819 10:29:15.064390    4789 provision.go:177] copyRemoteCerts
	I0819 10:29:15.064440    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:29:15.064455    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.064599    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.064695    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.064786    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.064886    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:15.103656    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:29:15.103727    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:29:15.123430    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:29:15.123497    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:29:15.143265    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:29:15.143333    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:29:15.162885    4789 provision.go:87] duration metric: took 251.064942ms to configureAuth
	I0819 10:29:15.162900    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:29:15.163052    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:15.163065    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:15.163221    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.163329    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.163417    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163506    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.163693    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.163824    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.163831    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:29:15.225270    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:29:15.225286    4789 buildroot.go:70] root file system type: tmpfs
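
The tmpfs answer matters: the guest is a Buildroot live image whose root filesystem lives in RAM, so nothing written under / survives a reboot, which is presumably why provisioning rewrites the docker unit on every start rather than relying on a persisted copy. Extracting the type from the `df --output=fstype / | tail -n 1` output is trivial; a sketch:

package main

import (
	"fmt"
	"strings"
)

// rootFSType pulls the filesystem type out of the df output gathered
// over SSH above: a header line, then the type ("tmpfs" here).
func rootFSType(dfOutput string) string {
	fields := strings.Fields(dfOutput)
	if len(fields) == 0 {
		return ""
	}
	return fields[len(fields)-1]
}

func main() {
	fmt.Println(rootFSType("Type\ntmpfs\n")) // tmpfs
}
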
	I0819 10:29:15.225356    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:29:15.225368    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.225510    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.225619    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225708    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225810    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.225948    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.226090    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.226134    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:29:15.299640    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:29:15.299658    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.299797    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.299889    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.299978    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.300067    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.300202    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.300355    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.300368    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:29:16.819930    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
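
Two details of the install step above are easy to miss. First, the duplicated Environment=NO_PROXY lines are not cumulative for a single variable: in systemd the later assignment of the same variable wins, so the effective NO_PROXY is 192.169.0.5,192.169.0.6. Second, the `diff ... || { mv ... && restart }` chain makes the install idempotent: docker is only restarted when the rendered unit actually differs, and here diff's "can't stat" simply means this is the first render on a fresh VM. A sketch of that compare-and-swap, with hypothetical paths and the daemon-reload/restart left to the caller:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged mimics the diff-then-move pattern above: replace
// the unit and report a needed restart only when the new contents
// differ, or when the file does not exist yet (as in this run).
func installIfChanged(path string, newContents []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContents) {
		return false, nil // identical: leave the unit alone
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	return true, os.WriteFile(path, newContents, 0o644)
}

func main() {
	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println(changed, err) // caller would daemon-reload + restart docker when changed
}
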
	
	I0819 10:29:16.819945    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:29:16.819953    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetURL
	I0819 10:29:16.820095    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:29:16.820107    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:29:16.820113    4789 client.go:171] duration metric: took 14.154897138s to LocalClient.Create
	I0819 10:29:16.820124    4789 start.go:167] duration metric: took 14.154947877s to libmachine.API.Create "ha-431000"
	I0819 10:29:16.820129    4789 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:29:16.820136    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:29:16.820145    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.820288    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:29:16.820301    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.820396    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.820494    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.820582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.820664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:16.862693    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:29:16.866416    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:29:16.866431    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:29:16.866540    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:29:16.866725    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:29:16.866732    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:29:16.866944    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:29:16.874578    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:29:16.904910    4789 start.go:296] duration metric: took 84.771069ms for postStartSetup
	I0819 10:29:16.904942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:16.905569    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.905740    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:16.906122    4789 start.go:128] duration metric: took 14.273822612s to createHost
	I0819 10:29:16.906138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.906230    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.906303    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906387    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906475    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.906573    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:16.906690    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:16.906697    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:29:16.969389    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088556.958185685
	
	I0819 10:29:16.969401    4789 fix.go:216] guest clock: 1724088556.958185685
	I0819 10:29:16.969406    4789 fix.go:229] Guest: 2024-08-19 10:29:16.958185685 -0700 PDT Remote: 2024-08-19 10:29:16.906131 -0700 PDT m=+127.499217490 (delta=52.054685ms)
	I0819 10:29:16.969416    4789 fix.go:200] guest clock delta is within tolerance: 52.054685ms
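
The clock check above compares the guest's `date +%s.%N` output against the host clock and only resets the guest when the skew exceeds a tolerance; here the 52ms delta passes. A sketch of the comparison using the values from this run (float64 parsing, so the last few nanosecond digits are inexact, and the one-second tolerance is illustrative, not minikube's):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` stamp and returns
// guest minus host, matching the sign of the delta logged above.
func clockDelta(guestStamp string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestStamp, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Guest and host stamps from the log lines above.
	d, _ := clockDelta("1724088556.958185685", time.Unix(1724088556, 906131000))
	within := d < time.Second && d > -time.Second
	fmt.Println(d, "within tolerance:", within) // ≈52ms, as logged
}
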
	I0819 10:29:16.969419    4789 start.go:83] releasing machines lock for "ha-431000-m03", held for 14.337247496s
	I0819 10:29:16.969437    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.969573    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.992258    4789 out.go:177] * Found network options:
	I0819 10:29:17.014265    4789 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:29:17.037508    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.037542    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.037561    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038432    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038682    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038835    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:29:17.038873    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:29:17.038922    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.038957    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.039067    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:29:17.039087    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:17.039116    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039298    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039332    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039497    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039590    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039679    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039721    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:17.039809    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:29:17.074320    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:29:17.074385    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:29:17.120302    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:29:17.120318    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.120398    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.135851    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:29:17.144402    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:29:17.152735    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.152784    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:29:17.161185    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.169599    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:29:17.177908    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.186319    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:29:17.194967    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:29:17.203702    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:29:17.212228    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:29:17.220632    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:29:17.228164    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:29:17.235717    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.329551    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:29:17.348829    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.348909    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:29:17.363903    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.374976    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:29:17.393061    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.404238    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.414728    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:29:17.438632    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.449143    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.464536    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:29:17.467445    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:29:17.474809    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:29:17.488421    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:29:17.581504    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:29:17.684960    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.684980    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:29:17.699658    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.803979    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:30:18.773891    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.968555005s)
	I0819 10:30:18.774012    4789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:30:18.808676    4789 out.go:201] 
	W0819 10:30:18.829152    4789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:29:15 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570013158Z" level=info msg="Starting up"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570447745Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.572542412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.584880924Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603137975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603181724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603219390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603233227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603303033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603338653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603471354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603509282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603521199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603528665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603591360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603811486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605351283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605389063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605504861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605538594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605610859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605677674Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607907354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607976584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607991948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608010711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608023403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608093276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608724366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608874333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608913351Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608929178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608943960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608968346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609006571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609021660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609032833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609044499Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609055485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609066063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609088279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609103865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609115537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609130257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609139734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609151164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609161605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609173829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609200246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609211000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609224200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609237871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609251525Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609296616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609316285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609327369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609362155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609478815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609512436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609530768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609541857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609553085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609563545Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610591556Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610680787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610769049Z" level=info msg="containerd successfully booted in 0.026402s"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.601341697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.606766805Z" level=info msg="Loading containers: start."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.688780306Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.769433920Z" level=info msg="Loading containers: done."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776749571Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776865122Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.804822251Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.805010917Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:29:16 ha-431000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.814047535Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:29:17 ha-431000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815466623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815881336Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815956644Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.816022765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:18 ha-431000-m03 dockerd[921]: time="2024-08-19T17:29:18.853267859Z" level=info msg="Starting up"
	Aug 19 17:30:18 ha-431000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0819 10:30:18.829235    4789 out.go:270] * 
	W0819 10:30:18.830413    4789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:30:18.888275    4789 out.go:201] 
	
	
	==> Docker <==
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.852612157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.852684377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.891805533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.891886204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.891898662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.891964144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.893388796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.893453899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.893466897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:07 ha-431000 dockerd[1275]: time="2024-08-19T17:28:07.893601828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:07 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3745c7f8fb9ffda1a9528dbab0743afd132acd46a2634643d4b5a24035dc2e4/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/868ee98671e833d733f787480bd37f293c8c6eb8b4092a75c7b96c7993f5f451/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/74fd2f09b011aa0f318ae4259efd3f3d52dc61d0bd78f032481d1a46763eeaae/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.132794637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133043856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133186443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133435141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139175494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139344496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139355701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139421519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157876304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157962624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157975535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.158198941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b9d1bccf00c94       cbb01a7bd410d                                                                                       2 minutes ago       Running             coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	e7cacf032435f       6e38f40d628db                                                                                       2 minutes ago       Running             storage-provisioner       0                   868ee98671e83       storage-provisioner
	a3891ab602da5       cbb01a7bd410d                                                                                       2 minutes ago       Running             coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166            2 minutes ago       Running             kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                       2 minutes ago       Running             kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	ed733554ed160       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f   2 minutes ago       Running             kube-vip                  0                   90ec229d87c2c       kube-vip-ha-431000
	11d9cd3b2f49f       1766f54c897f0                                                                                       2 minutes ago       Running             kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	262471364c991       604f5db92eaa8                                                                                       2 minutes ago       Running             kube-apiserver            0                   5a0fe916eaf1d       kube-apiserver-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                       2 minutes ago       Running             etcd                      0                   fc30d54d1b565       etcd-ha-431000
	2801f8f44773b       045733566833c                                                                                       2 minutes ago       Running             kube-controller-manager   0                   80d21805f230b       kube-controller-manager-ha-431000
	
	
	==> coredns [a3891ab602da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40841 - 35632 "HINFO IN 8043641794425982319.4992720317295253252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008506209s
	
	
	==> coredns [b9d1bccf00c9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54195 - 29045 "HINFO IN 6513715404119561949.1799819676960271336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007921235s
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:30:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:28:16 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:28:16 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:28:16 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:28:16 +0000   Mon, 19 Aug 2024 17:28:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7b5b85e2c64405f969f3e24eb671b2e
	  System UUID:                7f844fbb-0000-0000-b5d6-699bdfe1640c
	  Boot ID:                    cb211998-dc9c-4fd5-a169-3f6eeb2403fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-hr2qx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m31s
	  kube-system                 coredns-6f6b679f8f-vc76p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m31s
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m35s
	  kube-system                 kindnet-lvdbg                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m31s
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-5l56s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m30s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  2m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m42s (x8 over 2m42s)  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s (x8 over 2m42s)  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s (x7 over 2m42s)  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m35s                  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s                  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s                  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m32s                  node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeReady                2m13s                  kubelet          Node ha-431000 status is now: NodeReady
	  Normal  RegisteredNode           94s                    node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	
	
	Name:               ha-431000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:30:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:29:09 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:29:09 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:29:09 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:29:09 +0000   Mon, 19 Aug 2024 17:28:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-431000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 21fb6f298fbf435c88fd6e9f9b50e04f
	  System UUID:                decf4e23-0000-0000-95db-084dbcc69753
	  Boot ID:                    330a7904-5229-4d07-9792-de118102386c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-431000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         99s
	  kube-system                 kindnet-qmgqd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      101s
	  kube-system                 kube-apiserver-ha-431000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ha-431000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-5h7j2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-ha-431000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-vip-ha-431000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 97s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x7 over 101s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                  node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           94s                  node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	
	
	==> dmesg <==
	[  +2.712596] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230971] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.519395] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.106046] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.754357] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.260100] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.108326] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.116397] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.050322] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.370658] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.100232] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.114416] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.133019] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.706453] systemd-fstab-generator[1261]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.542020] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +4.524199] systemd-fstab-generator[1651]: Ignoring "noauto" option for root device
	[  +0.058523] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.145787] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
	[  +0.090131] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.001426] kauditd_printk_skb: 35 callbacks suppressed
	[Aug19 17:28] kauditd_printk_skb: 15 callbacks suppressed
	[ +36.695422] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [39fe08877284] <==
	{"level":"info","ts":"2024-08-19T17:27:40.693932Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:27:40.694654Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T17:27:40.694814Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T17:27:40.694850Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T17:28:39.576807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860) learners=(13991592590719088728)"}
	{"level":"info","ts":"2024-08-19T17:28:39.576958Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"c22c1f54a3cc7858","added-peer-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-19T17:28:39.577171Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577230Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577486Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577607Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577632Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-19T17:28:39.577678Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577764Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577976Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.578023Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582369Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T17:28:40.582407Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582418Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.596476Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.597370Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T17:28:40.597585Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.605913Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:41.107824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860 13991592590719088728)"}
	{"level":"info","ts":"2024-08-19T17:28:41.107895Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-19T17:28:41.107911Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	
	
	==> kernel <==
	 17:30:20 up 3 min,  0 users,  load average: 0.15, 0.15, 0.06
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:29:13.912650       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:29:23.922788       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:29:23.922919       1 main.go:299] handling current node
	I0819 17:29:23.922961       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:29:23.923045       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:29:33.921158       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:29:33.921199       1 main.go:299] handling current node
	I0819 17:29:33.921211       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:29:33.921216       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:29:43.915497       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:29:43.915597       1 main.go:299] handling current node
	I0819 17:29:43.915627       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:29:43.915646       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:29:53.913214       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:29:53.913463       1 main.go:299] handling current node
	I0819 17:29:53.913596       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:29:53.913875       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:30:03.920224       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:30:03.920402       1 main.go:299] handling current node
	I0819 17:30:03.920480       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:30:03.920529       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:30:13.915605       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:30:13.915791       1 main.go:299] handling current node
	I0819 17:30:13.915901       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:30:13.916052       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [262471364c99] <==
	I0819 17:27:41.909312       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 17:27:41.909315       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 17:27:41.909319       1 cache.go:39] Caches are synced for autoregister controller
	I0819 17:27:41.910409       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 17:27:41.910442       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 17:27:41.910483       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 17:27:41.910619       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:27:41.911091       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 17:27:41.912463       1 controller.go:615] quota admission added evaluator for: namespaces
	I0819 17:27:41.974665       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 17:27:42.843862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 17:27:42.851035       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 17:27:42.851176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:27:43.131229       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 17:27:43.156609       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 17:27:43.228677       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 17:27:43.232630       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0819 17:27:43.233263       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:27:43.235521       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 17:27:43.816793       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 17:27:45.642805       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 17:27:45.648554       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 17:27:45.656204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 17:27:49.372173       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 17:27:49.521616       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2801f8f44773] <==
	I0819 17:28:08.581440       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0819 17:28:08.734333       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="93.874µs"
	I0819 17:28:08.765941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="10.221761ms"
	I0819 17:28:08.766196       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="107.868µs"
	I0819 17:28:08.798248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="24.968573ms"
	I0819 17:28:08.798963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="21.08µs"
	I0819 17:28:16.172433       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	I0819 17:28:39.461092       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-431000-m02\" does not exist"
	I0819 17:28:39.469657       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-431000-m02" podCIDRs=["10.244.1.0/24"]
	I0819 17:28:39.469800       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:39.469904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:39.472651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:39.559701       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:41.186014       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:41.663171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:41.770811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:43.587076       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-431000-m02"
	I0819 17:28:43.602726       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:46.812463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:46.910622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:49.488441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:58.619481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:58.630217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:29:01.828992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:29:09.962018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	
	
	==> kube-proxy [889ab608901b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:27:50.162614       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:27:50.171417       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0819 17:27:50.171450       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:27:50.239161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:27:50.239202       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:27:50.239220       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:27:50.242102       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:27:50.242306       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:27:50.242335       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:27:50.253458       1 config.go:197] "Starting service config controller"
	I0819 17:27:50.253497       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:27:50.253518       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:27:50.253542       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:27:50.253889       1 config.go:326] "Starting node config controller"
	I0819 17:27:50.253915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:27:50.354735       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:27:50.354788       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:27:50.354817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	W0819 17:27:41.845647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:27:41.845824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:41.845963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:41.846041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:41.846154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:27:41.846286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:41.846418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:41.846569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.722533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:27:42.722591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.808762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 17:27:42.808891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.853276       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:27:42.853353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.858509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:27:42.858619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.867998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:27:42.868077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.900445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:27:42.900541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.970545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:27:42.970765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:43.004003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:43.004103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:27:43.339820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 17:27:49 ha-431000 kubelet[2153]: I0819 17:27:49.431199    2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4-lib-modules\") pod \"kube-proxy-5l56s\" (UID: \"6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4\") " pod="kube-system/kube-proxy-5l56s"
	Aug 19 17:27:49 ha-431000 kubelet[2153]: I0819 17:27:49.431362    2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2-cni-cfg\") pod \"kindnet-lvdbg\" (UID: \"d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2\") " pod="kube-system/kindnet-lvdbg"
	Aug 19 17:27:49 ha-431000 kubelet[2153]: I0819 17:27:49.431489    2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2-lib-modules\") pod \"kindnet-lvdbg\" (UID: \"d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2\") " pod="kube-system/kindnet-lvdbg"
	Aug 19 17:27:49 ha-431000 kubelet[2153]: I0819 17:27:49.538941    2153 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Aug 19 17:27:50 ha-431000 kubelet[2153]: I0819 17:27:50.702661    2153 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5l56s" podStartSLOduration=1.702645143 podStartE2EDuration="1.702645143s" podCreationTimestamp="2024-08-19 17:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 17:27:50.641853647 +0000 UTC m=+5.235552265" watchObservedRunningTime="2024-08-19 17:27:50.702645143 +0000 UTC m=+5.296343754"
	Aug 19 17:27:53 ha-431000 kubelet[2153]: I0819 17:27:53.645972    2153 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lvdbg" podStartSLOduration=1.459368371 podStartE2EDuration="4.64595809s" podCreationTimestamp="2024-08-19 17:27:49 +0000 UTC" firstStartedPulling="2024-08-19 17:27:50.052845536 +0000 UTC m=+4.646544145" lastFinishedPulling="2024-08-19 17:27:53.239435255 +0000 UTC m=+7.833133864" observedRunningTime="2024-08-19 17:27:53.6449441 +0000 UTC m=+8.238642717" watchObservedRunningTime="2024-08-19 17:27:53.64595809 +0000 UTC m=+8.239656703"
	Aug 19 17:28:07 ha-431000 kubelet[2153]: I0819 17:28:07.474654    2153 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Aug 19 17:28:07 ha-431000 kubelet[2153]: I0819 17:28:07.677010    2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e68070ef-bdea-45e6-b7a8-8834534fa616-tmp\") pod \"storage-provisioner\" (UID: \"e68070ef-bdea-45e6-b7a8-8834534fa616\") " pod="kube-system/storage-provisioner"
	Aug 19 17:28:07 ha-431000 kubelet[2153]: I0819 17:28:07.677305    2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgmm\" (UniqueName: \"kubernetes.io/projected/e68070ef-bdea-45e6-b7a8-8834534fa616-kube-api-access-6hgmm\") pod \"storage-provisioner\" (UID: \"e68070ef-bdea-45e6-b7a8-8834534fa616\") " pod="kube-system/storage-provisioner"
	Aug 19 17:28:07 ha-431000 kubelet[2153]: I0819 17:28:07.677541    2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99cvx\" (UniqueName: \"kubernetes.io/projected/625d8978-9556-45d9-a09a-f94be2492a2b-kube-api-access-99cvx\") pod \"coredns-6f6b679f8f-hr2qx\" (UID: \"625d8978-9556-45d9-a09a-f94be2492a2b\") " pod="kube-system/coredns-6f6b679f8f-hr2qx"
	Aug 19 17:28:07 ha-431000 kubelet[2153]: I0819 17:28:07.677772    2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/625d8978-9556-45d9-a09a-f94be2492a2b-config-volume\") pod \"coredns-6f6b679f8f-hr2qx\" (UID: \"625d8978-9556-45d9-a09a-f94be2492a2b\") " pod="kube-system/coredns-6f6b679f8f-hr2qx"
	Aug 19 17:28:07 ha-431000 kubelet[2153]: I0819 17:28:07.678004    2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdfebee-b458-4811-acd1-03eed5ffb5a7-config-volume\") pod \"coredns-6f6b679f8f-vc76p\" (UID: \"dcdfebee-b458-4811-acd1-03eed5ffb5a7\") " pod="kube-system/coredns-6f6b679f8f-vc76p"
	Aug 19 17:28:07 ha-431000 kubelet[2153]: I0819 17:28:07.678225    2153 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk64b\" (UniqueName: \"kubernetes.io/projected/dcdfebee-b458-4811-acd1-03eed5ffb5a7-kube-api-access-mk64b\") pod \"coredns-6f6b679f8f-vc76p\" (UID: \"dcdfebee-b458-4811-acd1-03eed5ffb5a7\") " pod="kube-system/coredns-6f6b679f8f-vc76p"
	Aug 19 17:28:08 ha-431000 kubelet[2153]: I0819 17:28:08.756781    2153 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vc76p" podStartSLOduration=19.756767809 podStartE2EDuration="19.756767809s" podCreationTimestamp="2024-08-19 17:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 17:28:08.738128883 +0000 UTC m=+23.331827501" watchObservedRunningTime="2024-08-19 17:28:08.756767809 +0000 UTC m=+23.350466421"
	Aug 19 17:28:08 ha-431000 kubelet[2153]: I0819 17:28:08.802515    2153 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hr2qx" podStartSLOduration=19.802501014 podStartE2EDuration="19.802501014s" podCreationTimestamp="2024-08-19 17:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-19 17:28:08.777637223 +0000 UTC m=+23.371335840" watchObservedRunningTime="2024-08-19 17:28:08.802501014 +0000 UTC m=+23.396199628"
	Aug 19 17:28:45 ha-431000 kubelet[2153]: E0819 17:28:45.527098    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:28:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:28:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:28:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:28:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:29:45 ha-431000 kubelet[2153]: E0819 17:29:45.526642    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:29:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:29:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:29:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:29:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
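The kubelet's recurring "Could not set up iptables canary" messages in the log above come from its periodic IPv6 canary check: ip6tables cannot initialize the nat table inside the VM ("Table does not exist (do you need to insmod?)"), so the KUBE-KUBELET-CANARY chain is never created. A minimal check, assuming the ha-431000 VM is still running (illustrative only, not part of the test run):

	# Does the ip6tables nat table exist inside the minikube VM?
	minikube -p ha-431000 ssh -- sudo ip6tables -t nat -L -n
	# A nonzero exit with the same "Table does not exist" message reproduces the
	# canary failure; the IPv4 side can be checked with: sudo iptables -t nat -L -n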
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (192.49s)
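The burst of reflector warnings in the kube-scheduler log above ("... is forbidden: User \"system:kube-scheduler\" cannot list resource ...") is RBAC denial during control-plane bootstrap: the scheduler's informers start before its ClusterRoleBindings are visible, and the later "Caches are synced" line shows they recovered. A minimal sketch for re-checking those permissions once the cluster is up, assuming the ha-431000 context is reachable (illustrative, not part of the test run):

	# Impersonate the scheduler and probe the exact resource/verb pairs from the log.
	kubectl --context ha-431000 auth can-i list storageclasses.storage.k8s.io \
	  --as=system:kube-scheduler --all-namespaces
	kubectl --context ha-431000 auth can-i list poddisruptionbudgets.policy \
	  --as=system:kube-scheduler --all-namespaces
	# "yes" on both indicates the startup denials were only a bootstrap race.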

TestMultiControlPlane/serial/DeployApp (700.53s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- rollout status deployment/busybox
E0819 10:30:29.012774    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:43.418476    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:43.424914    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:43.437459    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:43.459202    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:43.500640    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:43.581868    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:43.744059    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:44.065972    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:44.707930    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:45.989936    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:48.551879    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:30:53.673812    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:31:03.917095    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:31:24.399094    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:32:05.361460    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:33:27.285230    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:33:32.103442    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:35:29.060626    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:35:43.468359    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:36:11.175406    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
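The cert_rotation errors interleaved above all point at client certificates under the addons-080000 and functional-622000 profiles, which were deleted earlier in the run (see the "delete -p functional-622000" entry in the Audit table below); client-go's rotation watcher keeps polling the now-missing files. A plausible cleanup sketch, assuming stale entries survive in the kubeconfig (hypothetical, not performed by the test):

	# Remove kubeconfig entries left over from a deleted profile.
	kubectl config delete-context functional-622000
	kubectl config unset users.functional-622000
	kubectl config unset clusters.functional-622000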
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- rollout status deployment/busybox: exit status 1 (10m1.823678553s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 4 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 5 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 5 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 2 of 3 updated replicas are available...

-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
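The rollout stalled at 2 of 3 available replicas until the progress deadline expired, which is consistent with one pod never being scheduled (the "does not have a host assigned" errors further below). A minimal triage sketch, assuming the ha-431000 context (illustrative, not part of the test run):

	# Where did the rollout stop, and which pods lack a node?
	kubectl --context ha-431000 describe deployment busybox   # conditions, incl. ProgressDeadlineExceeded
	kubectl --context ha-431000 get pods -o wide              # NODE column is empty for unscheduled pods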
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0819 10:40:29.069327    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0819 10:40:43.476408    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
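The jsonpath used by the test prints only non-empty podIP values, so a Pending pod simply disappears from the output. A sketch that pairs each pod name with its IP, making missing IPs visible (illustrative):

	kubectl --context ha-431000 get pods \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'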
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-2l9lq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-wfcpq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-wfcpq -- nslookup kubernetes.io: exit status 1 (124.561609ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-wfcpq does not have a host assigned

** /stderr **
ha_test.go:173: Pod busybox-7dff88458-wfcpq could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-x7m6m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-2l9lq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-wfcpq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-wfcpq -- nslookup kubernetes.default: exit status 1 (122.748187ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-wfcpq does not have a host assigned

** /stderr **
ha_test.go:183: Pod busybox-7dff88458-wfcpq could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-x7m6m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-2l9lq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-wfcpq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-wfcpq -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (123.863234ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-wfcpq does not have a host assigned

** /stderr **
ha_test.go:191: Pod busybox-7dff88458-wfcpq could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-x7m6m -- nslookup kubernetes.default.svc.cluster.local
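kubectl exec returns BadRequest ("does not have a host assigned") when the target pod was never scheduled onto a node, so the three failed nslookup calls against busybox-7dff88458-wfcpq all trace back to scheduling, not DNS. A minimal check (illustrative, not part of the test run):

	# Phase and assigned node for the failing pod; Events explain any scheduling failure.
	kubectl --context ha-431000 get pod busybox-7dff88458-wfcpq \
	  -o jsonpath='{.status.phase}{"\t"}{.spec.nodeName}{"\n"}'
	kubectl --context ha-431000 describe pod busybox-7dff88458-wfcpq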
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.114725715s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| delete  | -p functional-622000                 | functional-622000 | jenkins | v1.33.1 | 19 Aug 24 10:27 PDT | 19 Aug 24 10:27 PDT |
	| start   | -p ha-431000 --wait=true             | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:27 PDT |                     |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=hyperkit                    |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- apply -f             | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:30 PDT | 19 Aug 24 10:30 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- rollout status       | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:30 PDT |                     |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:27:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:27:09.441458    4789 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:27:09.441716    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441721    4789 out.go:358] Setting ErrFile to fd 2...
	I0819 10:27:09.441725    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441914    4789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:27:09.443405    4789 out.go:352] Setting JSON to false
	I0819 10:27:09.468451    4789 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3399,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:27:09.468547    4789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:27:09.554597    4789 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:27:09.577770    4789 notify.go:220] Checking for updates...
	I0819 10:27:09.609734    4789 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:27:09.676944    4789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:09.699980    4789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:27:09.722951    4789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:27:09.744804    4789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.765726    4789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:27:09.787204    4789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:27:09.817679    4789 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 10:27:09.859821    4789 start.go:297] selected driver: hyperkit
	I0819 10:27:09.859849    4789 start.go:901] validating driver "hyperkit" against <nil>
	I0819 10:27:09.859893    4789 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:27:09.864287    4789 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.864395    4789 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:27:09.872759    4789 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:27:09.876743    4789 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.876768    4789 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:27:09.876803    4789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:27:09.877011    4789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:27:09.877072    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:09.877082    4789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 10:27:09.877094    4789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:27:09.877164    4789 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:09.877251    4789 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.919755    4789 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:27:09.940604    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:09.940675    4789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:27:09.940720    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:09.940918    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:09.940931    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:09.941271    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:09.941299    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json: {Name:mkf9dcbb24d8b9fbe62d81f81a7a87fec457d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:09.941835    4789 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:09.941963    4789 start.go:364] duration metric: took 95.166µs to acquireMachinesLock for "ha-431000"
	I0819 10:27:09.941997    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:09.942082    4789 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 10:27:09.963791    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:09.964075    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.964148    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:09.974068    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51111
	I0819 10:27:09.974512    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:09.974919    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:09.974932    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:09.975172    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:09.975283    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:09.975374    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:09.975471    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:09.975492    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:09.975527    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:09.975578    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975594    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975657    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:09.975695    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975707    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975719    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:09.975729    4789 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0819 10:27:09.975800    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.975970    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:09.976388    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:09.976397    4789 main.go:141] libmachine: (ha-431000) Calling .Create
	I0819 10:27:09.976462    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.976580    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:09.976459    4799 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.976633    4789 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:10.160305    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.160220    4799 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa...
	I0819 10:27:10.258779    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.258678    4799 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0819 10:27:10.258792    4789 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0819 10:27:10.258800    4789 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0819 10:27:10.259681    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.259588    4799 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000 ...
	I0819 10:27:10.634434    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.634476    4789 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:27:10.634529    4789 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:27:10.744945    4789 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:27:10.744966    4789 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:10.744993    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745030    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745065    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:10.745094    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:10.745118    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:10.748020    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Pid is 4802
	I0819 10:27:10.748404    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:27:10.748413    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.748494    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:10.749357    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:10.749398    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:10.749412    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:10.749423    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:10.749431    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:10.755634    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:10.806699    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:10.807300    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:10.807314    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:10.807322    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:10.807335    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.184562    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:11.184575    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:11.299194    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:11.299213    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:11.299228    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:11.299236    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.300075    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:11.300086    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:12.750038    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0819 10:27:12.750054    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:12.750189    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:12.750969    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:12.751019    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:12.751030    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:12.751039    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:12.751052    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:14.752158    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0819 10:27:14.752174    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:14.752264    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:14.753040    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:14.753090    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:14.753102    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:14.753111    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:14.753117    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.754325    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0819 10:27:16.754340    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:16.754402    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:16.755326    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:16.755347    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:16.755354    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:16.755373    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:16.755390    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.856153    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:27:16.856252    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:27:16.856262    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:27:16.880804    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:27:18.757489    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0819 10:27:18.757504    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:18.757601    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:18.758394    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:18.758435    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:18.758449    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:18.758481    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:18.758495    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:20.758927    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0819 10:27:20.758946    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.759035    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.759848    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:20.759873    4789 main.go:141] libmachine: (ha-431000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:20.759888    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:20.759901    4789 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:27:20.759913    4789 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
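
The attempt loop above is the driver polling macOS's vmnet DHCP lease database until the new VM's MAC address appears. A minimal sketch of that lookup in Go, assuming the stock key=value block layout of /var/db/dhcpd_leases (findLeaseIP is an illustrative helper, not minikube's actual function):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findLeaseIP scans a macOS vmnet dhcpd_leases file for a MAC address and
    // returns the IP from the same lease block, or "" if no lease matches yet.
    // Lease blocks list ip_address= before hw_address=, which this relies on.
    func findLeaseIP(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ip_address=") {
                ip = strings.TrimPrefix(line, "ip_address=")
            }
            if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
                return ip, nil
            }
        }
        return "", sc.Err()
    }

    func main() {
        ip, err := findLeaseIP("/var/db/dhcpd_leases", "b2:ad:7c:2f:19:d9")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("IP:", ip)
    }

Each numbered attempt in the log is one such scan, repeated every two seconds until a lease block carries the target hw_address.
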
	I0819 10:27:20.759952    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:20.760523    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760634    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760741    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:27:20.760753    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:20.760839    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.760885    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.761678    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:27:20.761690    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:27:20.761696    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:27:20.761702    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:20.761795    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:20.761883    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.761969    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.762060    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:20.762168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:20.762361    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:20.762369    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:27:21.818394    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
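
WaitForSSH treats the guest as reachable once a no-op command succeeds over SSH, which is why the log runs `exit 0` and reports an empty output. A hedged sketch of that readiness probe, shelling out to the ssh CLI (waitForSSH and its retry cadence are assumptions for illustration, not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH retries a no-op command ("exit 0") until the guest's sshd
    // accepts the connection, mirroring the WaitForSSH step in the log above.
    func waitForSSH(user, host, keyPath string, attempts int) error {
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("ssh",
                "-i", keyPath,
                "-o", "StrictHostKeyChecking=no",
                "-o", "ConnectTimeout=5",
                fmt.Sprintf("%s@%s", user, host),
                "exit 0")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not ready after %d attempts", host, attempts)
    }

    func main() {
        // Host, user, and key path are taken from the log; treat as example values.
        err := waitForSSH("docker", "192.169.0.5",
            "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa", 30)
        if err != nil {
            fmt.Println(err)
        }
    }
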
	I0819 10:27:21.818406    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:27:21.818419    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.818554    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.818654    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818747    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818841    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.818981    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.819131    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.819139    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:27:21.870784    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:27:21.870826    4789 main.go:141] libmachine: found compatible host: buildroot
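
Provisioner detection is just parsing the ID field out of the `cat /etc/os-release` output shown above. For example (detectProvisioner is a hypothetical helper):

    package main

    import (
        "fmt"
        "strings"
    )

    // detectProvisioner extracts the ID field from /etc/os-release output,
    // which is how the log above resolves "buildroot" as the provisioner.
    func detectProvisioner(osRelease string) string {
        for _, line := range strings.Split(osRelease, "\n") {
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
        fmt.Println(detectProvisioner(out)) // buildroot
    }
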
	I0819 10:27:21.870831    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:27:21.870837    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.870976    4789 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:27:21.870986    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.871077    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.871169    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.871272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871352    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871452    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.871577    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.871711    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.871719    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:27:21.937676    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:27:21.937694    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.937826    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.937927    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938017    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938112    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.938245    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.938391    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.938402    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:27:21.996654    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
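
The shell script above edits /etc/hosts idempotently: do nothing if the hostname is already present, rewrite an existing 127.0.1.1 entry if there is one, and only otherwise append. The same decision tree in Go, as a sketch (ensureHostname is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the /etc/hosts edit in the SSH script above:
    // leave the file alone if the hostname is present, rewrite an existing
    // 127.0.1.1 entry if there is one, otherwise append a new entry.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n", "ha-431000"))
    }
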
	I0819 10:27:21.996676    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:27:21.996692    4789 buildroot.go:174] setting up certificates
	I0819 10:27:21.996701    4789 provision.go:84] configureAuth start
	I0819 10:27:21.996714    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.996873    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:21.996990    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.997094    4789 provision.go:143] copyHostCerts
	I0819 10:27:21.997133    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997201    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:27:21.997209    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997337    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:27:21.997534    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997567    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:27:21.997572    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997714    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:27:21.997882    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.997926    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:27:21.997941    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.998049    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:27:21.998203    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:27:22.044837    4789 provision.go:177] copyRemoteCerts
	I0819 10:27:22.044896    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:27:22.044908    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.045021    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.045107    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.045191    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.045288    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:22.078701    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:27:22.078779    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:27:22.098027    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:27:22.098092    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:27:22.117169    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:27:22.117235    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:27:22.137411    4789 provision.go:87] duration metric: took 140.68689ms to configureAuth
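
configureAuth generates a server certificate whose subject alternative names cover every way the Docker daemon will be addressed (127.0.0.1, 192.169.0.5, ha-431000, localhost, minikube), then copies ca.pem, server.pem, and server-key.pem into /etc/docker; the repeated /etc/docker in the mkdir above is simply one -p target per remote cert path. A self-signed stand-in for the generation step using Go's crypto/x509 (an assumption-laden sketch: the real flow signs with the minikube CA, and the SAN list is the part to note):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // SANs match the "san=[...]" list in the provision log above.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            DNSNames:     []string{"ha-431000", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here; minikube signs with its CA cert and key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
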
	I0819 10:27:22.137424    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:27:22.137558    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:22.137574    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:22.137700    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.137783    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.137859    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.137942    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.138028    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.138134    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.138266    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.138274    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:27:22.191384    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:27:22.191397    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:27:22.191469    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:27:22.191481    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.191636    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.191724    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191834    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191924    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.192051    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.192193    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.192236    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:27:22.256138    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:27:22.256165    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.256301    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.256391    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256475    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256578    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.256695    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.256839    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.256851    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:27:23.816844    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:27:23.816860    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:27:23.816871    4789 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0819 10:27:23.817008    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:27:23.817016    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:27:23.817020    4789 client.go:171] duration metric: took 13.841219093s to LocalClient.Create
	I0819 10:27:23.817036    4789 start.go:167] duration metric: took 13.84126124s to libmachine.API.Create "ha-431000"
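
The docker.service install above is deliberately idempotent: the candidate unit is written to docker.service.new, and the move plus daemon-reload/enable/restart only fire when `diff` exits non-zero, that is, when the unit actually changed. Here diff failed because no unit existed yet, so the new file was moved into place and the service enabled. A small sketch rebuilding that one-liner (installUnit is an illustrative helper):

    package main

    import "fmt"

    // installUnit reproduces the idempotent unit install from the log: the new
    // unit is written to <unit>.new, and the move/daemon-reload/enable/restart
    // chain only runs when diff reports a difference (non-zero exit).
    func installUnit(name string) string {
        unit := "/lib/systemd/system/" + name
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || "+
                "{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
                "sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            unit, name)
    }

    func main() {
        fmt.Println(installUnit("docker.service"))
    }
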
	I0819 10:27:23.817044    4789 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:27:23.817051    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:27:23.817063    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.817219    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:27:23.817232    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.817321    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.817402    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.817497    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.817595    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.852993    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:27:23.857771    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:27:23.857792    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:27:23.857909    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:27:23.858094    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:27:23.858100    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:27:23.858323    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:27:23.868639    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:23.894485    4789 start.go:296] duration metric: took 77.430316ms for postStartSetup
	I0819 10:27:23.894509    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:23.895099    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.895256    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:23.895585    4789 start.go:128] duration metric: took 13.953185373s to createHost
	I0819 10:27:23.895598    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.895691    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.895790    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895879    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895966    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.896069    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:23.896228    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:23.896236    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:27:23.956133    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088443.744394113
	
	I0819 10:27:23.956145    4789 fix.go:216] guest clock: 1724088443.744394113
	I0819 10:27:23.956151    4789 fix.go:229] Guest: 2024-08-19 10:27:23.744394113 -0700 PDT Remote: 2024-08-19 10:27:23.895593 -0700 PDT m=+14.491162031 (delta=-151.198887ms)
	I0819 10:27:23.956169    4789 fix.go:200] guest clock delta is within tolerance: -151.198887ms
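
The guest clock check samples `date +%s.%N` inside the VM and subtracts the host's wall clock; a delta inside tolerance (about -151ms here) means no resync is needed. A sketch of the parsing and subtraction, assuming the nine-digit fractional part that `date +%N` emits (clockDelta is illustrative):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns
    // guestTime - hostTime, the same delta the fix step logs above.
    // Assumes the fractional part is the nine-digit nanosecond field of %N.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        // Values taken from the log lines above.
        d, _ := clockDelta("1724088443.744394113", time.Unix(1724088443, 895593000))
        fmt.Println(d) // roughly -151.198887ms, as logged
    }
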
	I0819 10:27:23.956173    4789 start.go:83] releasing machines lock for "ha-431000", held for 14.013893151s
	I0819 10:27:23.956192    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956322    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.956416    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956749    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956860    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956951    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:27:23.956980    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957023    4789 ssh_runner.go:195] Run: cat /version.json
	I0819 10:27:23.957036    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957073    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957109    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957170    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957184    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957292    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957350    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.957384    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:24.032926    4789 ssh_runner.go:195] Run: systemctl --version
	I0819 10:27:24.037723    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:27:24.041939    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:27:24.041985    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:27:24.055424    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:27:24.055435    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.055529    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.070257    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:27:24.079169    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:27:24.088264    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.088319    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:27:24.097172    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.105902    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:27:24.114585    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.123406    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:27:24.132626    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:27:24.141378    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:27:24.150490    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:27:24.158980    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:27:24.167068    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:27:24.175030    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.269460    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
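
The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 runtime, and /etc/cni/net.d for CNI config before containerd is restarted. The core rewrite, sketched in Go (forceCgroupfs is an illustrative stand-in for the sed invocation):

    package main

    import (
        "fmt"
        "regexp"
    )

    // forceCgroupfs mirrors the sed edit in the log: flip any SystemdCgroup
    // setting in containerd's config.toml to false so kubelet and the runtime
    // agree on the cgroupfs driver.
    func forceCgroupfs(configToml string) string {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configToml, "${1}SystemdCgroup = false")
    }

    func main() {
        in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        fmt.Print(forceCgroupfs(in))
    }
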
	I0819 10:27:24.289328    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.289405    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:27:24.304907    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.317291    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:27:24.330289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.340851    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.351456    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:27:24.376914    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.387402    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.402522    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:27:24.405426    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:27:24.412799    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:27:24.426019    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:27:24.528550    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:27:24.636829    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.636893    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:27:24.652027    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.753641    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:27.037286    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.283575266s)
	I0819 10:27:27.037346    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:27:27.047775    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:27:27.062961    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.074027    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:27:27.172330    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:27:27.284593    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.395779    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:27:27.409552    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.420868    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.532356    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:27:27.591558    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:27:27.591636    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:27:27.595967    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:27:27.596013    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:27:27.599275    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:27:27.625101    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
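
After cri-docker.service restarts, start waits up to 60s for /var/run/cri-dockerd.sock to appear (the `stat` above) and then up to 60s more for `crictl version` to answer. A sketch of the socket wait, with a hypothetical poll interval:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a socket path the way the "Will wait 60s for
    // socket path" step does, giving the freshly restarted cri-dockerd time
    // to create /var/run/cri-dockerd.sock.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // poll interval is an assumption
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
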
	I0819 10:27:27.625173    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.642636    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.693299    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:27:27.693355    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:27.693783    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:27:27.698129    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
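
The bash one-liner above pins host.minikube.internal to the gateway IP by filtering out any stale entry, appending the fresh mapping, and copying the result back over /etc/hosts. The same filter-and-append idiom in Go (pinHostEntry is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // pinHostEntry reproduces the grep -v / append idiom from the log: drop
    // any stale tab-separated line for the name, then append the current mapping.
    func pinHostEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(pinHostEntry("127.0.0.1 localhost\n192.169.0.1\thost.minikube.internal\n",
            "192.169.0.1", "host.minikube.internal"))
    }
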
	I0819 10:27:27.708916    4789 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:27:27.708982    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:27.709038    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:27.721971    4789 docker.go:685] Got preloaded images: 
	I0819 10:27:27.721984    4789 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0819 10:27:27.722034    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:27.730353    4789 ssh_runner.go:195] Run: which lz4
	I0819 10:27:27.733218    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 10:27:27.733323    4789 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:27:27.736425    4789 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:27:27.736445    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0819 10:27:28.750864    4789 docker.go:649] duration metric: took 1.017557348s to copy over tarball
	I0819 10:27:28.750956    4789 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:27:31.074672    4789 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323648699s)
	I0819 10:27:31.074688    4789 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:27:31.100633    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:31.109680    4789 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0819 10:27:31.123335    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:31.234501    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:33.578614    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344043512s)
	I0819 10:27:33.578701    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:33.592021    4789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 10:27:33.592040    4789 cache_images.go:84] Images are preloaded, skipping loading
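
The preload path works in three moves: decide the images are missing (kube-apiserver:v1.31.0 absent from `docker images`), scp the lz4 tarball to /preloaded.tar.lz4 and untar it into /var, then restart Docker and re-list images to confirm. The presence check, sketched (preloadedOK is a hypothetical helper):

    package main

    import (
        "fmt"
        "strings"
    )

    // preloadedOK checks `docker images --format {{.Repository}}:{{.Tag}}`
    // output for the image the preload step keys on, matching the
    // "kube-apiserver:v1.31.0 wasn't preloaded" decision in the log.
    func preloadedOK(imagesOut, want string) bool {
        for _, img := range strings.Split(imagesOut, "\n") {
            if strings.TrimSpace(img) == want {
                return true
            }
        }
        return false
    }

    func main() {
        out := "registry.k8s.io/kube-apiserver:v1.31.0\nregistry.k8s.io/etcd:3.5.15-0\n"
        fmt.Println(preloadedOK(out, "registry.k8s.io/kube-apiserver:v1.31.0")) // true
    }
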
	I0819 10:27:33.592048    4789 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:27:33.592132    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:27:33.592198    4789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:27:33.629283    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:33.629295    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:33.629309    4789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:27:33.629329    4789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:27:33.629424    4789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
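The generated kubeadm config above is a single multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. A trivial sketch of splitting such a file into its documents (splitDocs is illustrative; a real consumer would hand each document to a YAML decoder):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitDocs splits a multi-document YAML file on "---" separators; the
    // kubeadm config written above carries four documents.
    func splitDocs(multi string) []string {
        var docs []string
        for _, d := range strings.Split(multi, "\n---\n") {
            if strings.TrimSpace(d) != "" {
                docs = append(docs, d)
            }
        }
        return docs
    }

    func main() {
        cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
        fmt.Println(len(splitDocs(cfg))) // 4
    }
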
	I0819 10:27:33.629439    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:27:33.629491    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:27:33.642904    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:27:33.642969    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
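
kube-vip runs as a static pod on the control plane and holds the HA virtual IP 192.169.0.254 via ARP and leader election; lb_enable and lb_port are only set because the earlier `modprobe --all ip_vs ...` succeeded, which is what "auto-enabling control-plane load-balancing" refers to. A sketch of that gate (ipvsAvailable is illustrative and would run inside the guest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ipvsAvailable mirrors the check behind "auto-enabling control-plane
    // load-balancing in kube-vip": if the IPVS kernel modules load, the
    // manifest above gets lb_enable=true on port 8443.
    func ipvsAvailable() bool {
        err := exec.Command("sudo", "sh", "-c",
            "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
        return err == nil
    }

    func main() {
        fmt.Println("IPVS load balancing:", ipvsAvailable())
    }
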
	I0819 10:27:33.643018    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:27:33.652008    4789 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:27:33.652070    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:27:33.660066    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:27:33.673571    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:27:33.686700    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:27:33.700085    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0819 10:27:33.713804    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:27:33.716661    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:33.726684    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:33.822205    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:27:33.836833    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:27:33.836844    4789 certs.go:194] generating shared ca certs ...
	I0819 10:27:33.836855    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.837051    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:27:33.837132    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:27:33.837142    4789 certs.go:256] generating profile certs ...
	I0819 10:27:33.837189    4789 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:27:33.837203    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0819 10:27:33.888319    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt ...
	I0819 10:27:33.888333    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt: {Name:mk2ecc34873277fbe11bf267ec0d97684e18e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888666    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key ...
	I0819 10:27:33.888675    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key: {Name:mk51abee214c838f4621902241303fe73ba93aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888900    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e
	I0819 10:27:33.888915    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0819 10:27:34.060027    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e ...
	I0819 10:27:34.060046    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e: {Name:mk108eb9cf88ab2aae15883e4a3724751adb3118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060347    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e ...
	I0819 10:27:34.060356    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e: {Name:mk8fae11cce9c9a45d3e151953d1ee9ab2cc82d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060557    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:27:34.060759    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:27:34.060929    4789 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:27:34.060943    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0819 10:27:34.243675    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0819 10:27:34.243690    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mkeb1eac7ee8b3901067565b7ff883710f2d1088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244061    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key ...
	I0819 10:27:34.244069    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key: {Name:mkc1afcd7a6a9a572716155e33c32e7def81650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244312    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:27:34.244340    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:27:34.244378    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:27:34.244398    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:27:34.244416    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:27:34.244448    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:27:34.244486    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:27:34.244521    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:27:34.244615    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:27:34.244666    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:27:34.244675    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:27:34.244748    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:27:34.244776    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:27:34.244831    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:27:34.244909    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:34.244942    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.244990    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.245007    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.245522    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:27:34.267677    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:27:34.287348    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:27:34.309971    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:27:34.330910    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 10:27:34.350036    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:27:34.370663    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:27:34.390457    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:27:34.410226    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:27:34.431025    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:27:34.451232    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:27:34.471133    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:27:34.487758    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:27:34.493769    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:27:34.506308    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511941    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511996    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.519851    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:27:34.531120    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:27:34.540803    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544302    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544341    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.548724    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:27:34.558817    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:27:34.568088    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571692    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571731    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.575999    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
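For reference, the test/ln/openssl exchanges above are how minikube publishes CA certificates inside the guest: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its own name and again under its OpenSSL subject hash (the c_rehash convention), which is what TLS clients use for CA lookup. A minimal shell sketch of the same steps, with the paths and the 51391683 hash taken from this run:

	sudo ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem
	# subject hash used for OpenSSL's <hash>.0 lookup alias (51391683 here)
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem)
	sudo ln -fs /etc/ssl/certs/2174.pem "/etc/ssl/certs/${hash}.0"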
	I0819 10:27:34.585057    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:27:34.588207    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:27:34.588251    4789 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:34.588345    4789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:27:34.601241    4789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:27:34.609838    4789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:27:34.618794    4789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:27:34.627200    4789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:27:34.627208    4789 kubeadm.go:157] found existing configuration files:
	
	I0819 10:27:34.627243    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:27:34.635162    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:27:34.635198    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:27:34.643336    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:27:34.651247    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:27:34.651280    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:27:34.659346    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.667240    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:27:34.667281    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.675386    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:27:34.684053    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:27:34.684105    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
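The four grep-then-rm exchanges above are one stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected endpoint https://control-plane.minikube.internal:8443 is deleted before kubeadm init runs. Condensed into an equivalent sketch using only the paths and endpoint shown in the log:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done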
	I0819 10:27:34.692357    4789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:27:34.751991    4789 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:27:34.752160    4789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:27:34.833970    4789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:27:34.834062    4789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:27:34.834153    4789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:27:34.842513    4789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:27:34.863067    4789 out.go:235]   - Generating certificates and keys ...
	I0819 10:27:34.863126    4789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:27:34.863179    4789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:27:35.003012    4789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:27:35.766829    4789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:27:35.976153    4789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:27:36.134850    4789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:27:36.228947    4789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:27:36.229166    4789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.375842    4789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:27:36.375934    4789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.597289    4789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:27:36.907219    4789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:27:37.426404    4789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:27:37.426585    4789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:27:37.566387    4789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:27:38.000620    4789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:27:38.121335    4789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:27:38.179042    4789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:27:38.231270    4789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:27:38.231752    4789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:27:38.233818    4789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:27:38.255454    4789 out.go:235]   - Booting up control plane ...
	I0819 10:27:38.255535    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:27:38.255605    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:27:38.255655    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:27:38.255734    4789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:27:38.255809    4789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:27:38.255842    4789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:27:38.364951    4789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:27:38.365069    4789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:27:39.366309    4789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001984632s
	I0819 10:27:39.366388    4789 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:27:45.029099    4789 kubeadm.go:310] [api-check] The API server is healthy after 5.666724975s
	I0819 10:27:45.039440    4789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:27:45.046481    4789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:27:45.059797    4789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:27:45.059959    4789 kubeadm.go:310] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:27:45.067482    4789 kubeadm.go:310] [bootstrap-token] Using token: rrr6yu.ivgebthw63l7ehzv
	I0819 10:27:45.106820    4789 out.go:235]   - Configuring RBAC rules ...
	I0819 10:27:45.107004    4789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:27:45.110638    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:27:45.151902    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:27:45.154406    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:27:45.156223    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:27:45.158190    4789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:27:45.434935    4789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:27:45.846068    4789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:27:46.434136    4789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:27:46.434675    4789 kubeadm.go:310] 
	I0819 10:27:46.434724    4789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:27:46.434728    4789 kubeadm.go:310] 
	I0819 10:27:46.434798    4789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:27:46.434808    4789 kubeadm.go:310] 
	I0819 10:27:46.434829    4789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:27:46.434881    4789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:27:46.434925    4789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:27:46.434930    4789 kubeadm.go:310] 
	I0819 10:27:46.434974    4789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:27:46.434984    4789 kubeadm.go:310] 
	I0819 10:27:46.435035    4789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:27:46.435041    4789 kubeadm.go:310] 
	I0819 10:27:46.435080    4789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:27:46.435139    4789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:27:46.435197    4789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:27:46.435204    4789 kubeadm.go:310] 
	I0819 10:27:46.435268    4789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:27:46.435333    4789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:27:46.435337    4789 kubeadm.go:310] 
	I0819 10:27:46.435410    4789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435498    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd \
	I0819 10:27:46.435515    4789 kubeadm.go:310] 	--control-plane 
	I0819 10:27:46.435520    4789 kubeadm.go:310] 
	I0819 10:27:46.435589    4789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:27:46.435594    4789 kubeadm.go:310] 
	I0819 10:27:46.435664    4789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435746    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd 
	I0819 10:27:46.435997    4789 kubeadm.go:310] W0819 17:27:34.545490    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436229    4789 kubeadm.go:310] W0819 17:27:34.546600    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436316    4789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
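Should the rrr6yu.ivgebthw63l7ehzv bootstrap token printed above expire (kubeadm tokens default to a 24h TTL), the --discovery-token-ca-cert-hash value can be recomputed on the control plane; per the kubeadm documentation it is the SHA-256 of the cluster CA's DER-encoded public key:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'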
	I0819 10:27:46.436331    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:46.436337    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:46.458203    4789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:27:46.517773    4789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:27:46.523858    4789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:27:46.523872    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:27:46.539513    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 10:27:46.759807    4789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:27:46.759878    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:46.759883    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0819 10:27:46.777623    4789 ops.go:34] apiserver oom_adj: -16
	I0819 10:27:46.926523    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.427175    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.927281    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.428033    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.926686    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.426608    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.926666    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:50.010199    4789 kubeadm.go:1113] duration metric: took 3.25030545s to wait for elevateKubeSystemPrivileges
	I0819 10:27:50.010216    4789 kubeadm.go:394] duration metric: took 15.42163041s to StartCluster
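The repeated `kubectl get sa default` runs between 10:27:46.9 and 10:27:50.0 are a readiness poll at roughly 500ms intervals: once the default ServiceAccount exists, the service-account controller has finished bootstrapping the namespace and the minikube-rbac clusterrolebinding created above can take effect. A standalone equivalent of that wait, using the guest-side kubeconfig from the log:

	until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done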
	I0819 10:27:50.010227    4789 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.010325    4789 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.010762    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.011021    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:27:50.011033    4789 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.011050    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:27:50.011076    4789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:27:50.011116    4789 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0819 10:27:50.011120    4789 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0819 10:27:50.011148    4789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0819 10:27:50.011152    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.011155    4789 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0819 10:27:50.011186    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.011415    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011420    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011430    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.011431    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.020667    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
	I0819 10:27:50.021171    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021230    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
	I0819 10:27:50.021523    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021533    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.021634    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021753    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.021940    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.022115    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.022146    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.022229    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.022806    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.022988    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.023051    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.024924    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.025156    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:27:50.025529    4789 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:27:50.025699    4789 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0819 10:27:50.025720    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.025937    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.025963    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.031229    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51138
	I0819 10:27:50.031604    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.031942    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.031953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.032154    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.032270    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.032358    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.032435    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.033436    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.034958    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51140
	I0819 10:27:50.035269    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.035586    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.035596    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.035796    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.036148    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.036165    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.044937    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51142
	I0819 10:27:50.045312    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.045667    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.045680    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.045893    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.045996    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.046077    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.046151    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.047102    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.047225    4789 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.047234    4789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:27:50.047243    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.047325    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.047417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.047495    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.047571    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.056055    4789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:27:50.076134    4789 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.076146    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:27:50.076163    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.076310    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.076417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.076556    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.076664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.113554    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 10:27:50.127003    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.262022    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.488277    4789 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
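The sed pipeline at 10:27:50.113554 performs that injection by rewriting the coredns ConfigMap in place: a hosts plugin block is inserted ahead of the forward directive (and a log directive ahead of errors), so the rewritten Corefile answers for the gateway alias before falling through to normal upstream resolution. The inserted fragment, exactly as the sed expressions spell it out:

	hosts {
	   192.169.0.1 host.minikube.internal
	   fallthrough
	}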
	I0819 10:27:50.488318    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488327    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488534    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488547    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488556    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488563    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488564    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488681    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488704    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488718    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488767    4789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:27:50.488780    4789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:27:50.488862    4789 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 10:27:50.488867    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.488877    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.488882    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495057    4789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:27:50.495477    4789 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 10:27:50.495484    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.495490    4789 round_trippers.go:473]     Content-Type: application/json
	I0819 10:27:50.495494    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.498504    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:27:50.498632    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.498641    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.498797    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.498806    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.498814    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649595    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649607    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.649833    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.649843    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649848    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.649874    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649893    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.650019    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.650028    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.650044    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.673040    4789 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 10:27:50.709732    4789 addons.go:510] duration metric: took 698.654107ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 10:27:50.709774    4789 start.go:246] waiting for cluster config update ...
	I0819 10:27:50.709799    4789 start.go:255] writing updated cluster config ...
	I0819 10:27:50.746763    4789 out.go:201] 
	I0819 10:27:50.768467    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.768565    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.790908    4789 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:27:50.832651    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:50.832673    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:50.832790    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:50.832801    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:50.832852    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.833261    4789 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:50.833314    4789 start.go:364] duration metric: took 41.162µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:27:50.833329    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.833382    4789 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0819 10:27:50.854688    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:50.854833    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.854870    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.864309    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
	I0819 10:27:50.864640    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.864951    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.864963    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.865175    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.865294    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:27:50.865374    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:27:50.865472    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:50.865485    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:50.865515    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:50.865553    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865565    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865607    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:50.865634    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865649    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865666    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:50.865676    4789 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0819 10:27:50.865754    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.865776    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:27:50.891966    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:50.891987    4789 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0819 10:27:50.892145    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.892330    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:50.892137    4845 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:50.892421    4789 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:51.078705    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.078630    4845 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa...
	I0819 10:27:51.171843    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.171751    4845 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0819 10:27:51.171860    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0819 10:27:51.171868    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0819 10:27:51.172685    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.172591    4845 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02 ...
	I0819 10:27:51.544884    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.544910    4789 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:27:51.544922    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:27:51.571631    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:27:51.571653    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:51.571680    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571706    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571739    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:51.571767    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
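
Between the Arguments dump above and the "Pid is 4850" line that follows, the driver simply forks /usr/local/bin/hyperkit and records the child PID. A minimal Go sketch of that launch with os/exec; the argument list is abbreviated from the logged argv and the pid-file path is a placeholder, not the machine's real StateDir:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Abbreviated from the Arguments slice logged above; "/tmp/hyperkit.pid"
		// is a placeholder for the per-machine StateDir pid file.
		args := []string{
			"-A", "-u",
			"-F", "/tmp/hyperkit.pid",
			"-c", "2", "-m", "2200M",
			"-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net",
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		cmd.Stdout = os.Stdout // the driver instead redirects both streams to its logger
		cmd.Stderr = os.Stderr
		if err := cmd.Start(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("Pid is", cmd.Process.Pid) // cf. the "Pid is 4850" line below
	}
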
	I0819 10:27:51.571780    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:51.574668    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Pid is 4850
	I0819 10:27:51.575734    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:27:51.575757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.575783    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:51.576702    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:51.576759    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:51.576778    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:51.576816    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:51.576830    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:51.576844    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:51.582262    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:51.590515    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:51.591362    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:51.591388    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:51.591397    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:51.591407    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:51.978930    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:51.978947    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:52.094059    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:52.094091    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:52.094127    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:52.094142    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:52.094869    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:52.094879    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:53.577521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0819 10:27:53.577541    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:53.577636    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:53.578446    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:53.578461    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:53.578472    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:53.578481    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:53.578489    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:53.578507    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:55.579485    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0819 10:27:55.579501    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:55.579576    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:55.580358    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:55.580387    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:55.580414    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:55.580426    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:55.580434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:55.580442    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.581588    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0819 10:27:57.581603    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:57.581681    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:57.582486    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:57.582510    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:57.582521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:57.582530    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:57.582540    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:57.582548    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.680321    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:27:57.680434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:27:57.680445    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:27:57.704982    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:27:59.583757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0819 10:27:59.583772    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:59.583842    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:59.584652    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:59.584696    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:59.584710    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:59.584720    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:59.584729    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:59.584737    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:28:01.585137    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0819 10:28:01.585154    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.585235    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.585996    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:28:01.586042    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:28:01.586055    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:28:01.586080    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:28:01.586086    4789 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
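
The attempts above show how the driver resolves the new VM's address: every two seconds it rescans the vmnet DHCP lease file for the MAC it assigned until a matching entry appears. A minimal Go sketch of one such scan, assuming the stock macOS /var/db/dhcpd_leases layout of key=value lines inside {...} records; lookupLeaseIP is an illustrative name, not the driver's:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// lookupLeaseIP scans the vmnet lease file for the given MAC and returns
	// the IPv4 address bound to it, assuming ip_address= precedes hw_address=
	// within each record.
	func lookupLeaseIP(leaseFile, mac string) (string, error) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address carries a leading type byte, e.g. "1,5a:74:68:47:b9:72".
				if strings.HasSuffix(line, mac) {
					return ip, nil
				}
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := lookupLeaseIP("/var/db/dhcpd_leases", "5a:74:68:47:b9:72")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip) // the run above resolved to 192.169.0.6
	}
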
	I0819 10:28:01.586098    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:01.586694    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586794    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586889    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:28:01.586896    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:28:01.586980    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.587029    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.587790    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:28:01.587796    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:28:01.587800    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:28:01.587804    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:01.587881    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:01.587956    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588060    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588138    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:01.588256    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:01.588435    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:01.588443    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:28:02.645180    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
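
The empty output above is the successful "exit 0" round trip that ends the WaitForSSH loop. A simplified Go stand-in for that wait is sketched below; unlike libmachine, which runs the command over an authenticated SSH session, this version only probes that port 22 accepts connections:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls until addr accepts TCP connections or the deadline
	// passes; the retry and dial timeouts are assumed values.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh not reachable on %s within %s", addr, timeout)
	}

	func main() {
		if err := waitForSSH("192.169.0.6:22", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
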
	I0819 10:28:02.645193    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:28:02.645198    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.645326    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.645422    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645501    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645583    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.645718    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.645869    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.645877    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:28:02.700961    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:28:02.700992    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:28:02.700998    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:28:02.701003    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701132    4789 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:28:02.701143    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701237    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.701327    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.701424    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701502    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701588    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.701720    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.701855    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.701864    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:28:02.773500    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:28:02.773515    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.773649    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.773737    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773840    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773945    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.774071    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.774226    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.774237    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:28:02.838956    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:28:02.838971    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:28:02.838984    4789 buildroot.go:174] setting up certificates
	I0819 10:28:02.838992    4789 provision.go:84] configureAuth start
	I0819 10:28:02.838998    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.839135    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:02.839223    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.839322    4789 provision.go:143] copyHostCerts
	I0819 10:28:02.839347    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839393    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:28:02.839399    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839532    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:28:02.839738    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839769    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:28:02.839774    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839845    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:28:02.839992    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840021    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:28:02.840025    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840090    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:28:02.840244    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
	I0819 10:28:02.878856    4789 provision.go:177] copyRemoteCerts
	I0819 10:28:02.878899    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:28:02.878912    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.879041    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.879132    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.879231    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.879330    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:02.914748    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:28:02.914819    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:28:02.934608    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:28:02.934673    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:28:02.954833    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:28:02.954900    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:28:02.974652    4789 provision.go:87] duration metric: took 135.649275ms to configureAuth
	I0819 10:28:02.974666    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:28:02.974809    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:02.974823    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:02.974958    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.975063    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.975147    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975219    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975328    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.975454    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.975601    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.975609    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:28:03.033628    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:28:03.033639    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:28:03.033715    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:28:03.033730    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.033861    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.033950    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034053    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034140    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.034264    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.034412    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.034459    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:28:03.102644    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:28:03.102663    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.102811    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.102898    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.102999    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.103120    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.103244    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.103390    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.103404    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:28:04.637367    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
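
The two listings above are the same drop-in unit, first inside the tee command and then as its echoed output (the shell strips the inner quotes and the \$ escape on the way through); it is rendered host-side from a template and swapped in only when diff reports a change. A cut-down sketch of that rendering with text/template; unitOpts and its fields are illustrative, not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// unitOpts carries the per-node pieces of the unit; illustrative only.
	type unitOpts struct {
		Env   []string // extra Environment= lines, e.g. NO_PROXY=192.169.0.5
		Flags []string // extra dockerd flags appended to ExecStart
	}

	// Built from quoted fragments so no document indentation leaks into the
	// rendered unit.
	const unitTmpl = "[Unit]\n" +
		"Description=Docker Application Container Engine\n" +
		"After=network.target minikube-automount.service docker.socket\n\n" +
		"[Service]\n" +
		"Type=notify\n" +
		"{{range .Env}}Environment={{.}}\n{{end}}" +
		"ExecStart=\n" +
		"ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock{{range .Flags}} {{.}}{{end}}\n\n" +
		"[Install]\n" +
		"WantedBy=multi-user.target\n"

	func main() {
		t := template.Must(template.New("docker.service").Parse(unitTmpl))
		opts := unitOpts{
			Env:   []string{"NO_PROXY=192.169.0.5"},
			Flags: []string{"--label provider=hyperkit", "--insecure-registry 10.96.0.0/12"},
		}
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}
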
	I0819 10:28:04.637381    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:28:04.637388    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0819 10:28:04.637524    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:28:04.637530    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:28:04.637534    4789 client.go:171] duration metric: took 13.771742286s to LocalClient.Create
	I0819 10:28:04.637544    4789 start.go:167] duration metric: took 13.771771513s to libmachine.API.Create "ha-431000"
	I0819 10:28:04.637550    4789 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:28:04.637557    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:28:04.637566    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.637712    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:28:04.637723    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.637834    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.637926    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.638026    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.638127    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.678475    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:28:04.682965    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:28:04.682980    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:28:04.683079    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:28:04.683246    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:28:04.683253    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:28:04.683434    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:28:04.695086    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:04.723279    4789 start.go:296] duration metric: took 85.720185ms for postStartSetup
	I0819 10:28:04.723311    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:04.723943    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.724123    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:28:04.724446    4789 start.go:128] duration metric: took 13.890752069s to createHost
	I0819 10:28:04.724460    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.724558    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.724679    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724786    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724871    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.724979    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:04.725097    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:04.725103    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:28:04.784682    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088484.852271103
	
	I0819 10:28:04.784694    4789 fix.go:216] guest clock: 1724088484.852271103
	I0819 10:28:04.784698    4789 fix.go:229] Guest: 2024-08-19 10:28:04.852271103 -0700 PDT Remote: 2024-08-19 10:28:04.724454 -0700 PDT m=+55.319126445 (delta=127.817103ms)
	I0819 10:28:04.784725    4789 fix.go:200] guest clock delta is within tolerance: 127.817103ms
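
The clock check above runs date +%s.%N in the guest and compares the result with host time, proceeding only while the delta stays inside a tolerance. A small Go sketch of that comparison; the one-second tolerance used here is an assumed value, the log does not state the real threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's "date +%s.%N" output and returns how far
	// the guest clock sits from the host clock at the moment of the call.
	func clockDelta(guest string) (time.Duration, error) {
		secStr, nsecStr, ok := strings.Cut(strings.TrimSpace(guest), ".")
		if !ok {
			return 0, fmt.Errorf("unexpected date output %q", guest)
		}
		sec, err := strconv.ParseInt(secStr, 10, 64)
		if err != nil {
			return 0, err
		}
		nsec, err := strconv.ParseInt(nsecStr, 10, 64)
		if err != nil {
			return 0, err
		}
		return time.Since(time.Unix(sec, nsec)), nil
	}

	func main() {
		// The value logged above; replaying an old timestamp, so the computed
		// delta will be large when run today.
		d, err := clockDelta("1724088484.852271103")
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // assumed; the log omits the real threshold
		fmt.Printf("delta=%v within=%v\n", d, d.Abs() <= tolerance)
	}
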
	I0819 10:28:04.784731    4789 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.951104834s
	I0819 10:28:04.784750    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.784884    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.807240    4789 out.go:177] * Found network options:
	I0819 10:28:04.829600    4789 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:28:04.851548    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.851607    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852495    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852747    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852876    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:28:04.852915    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:28:04.852962    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.853080    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:28:04.853100    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.853127    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853372    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853394    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853596    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853633    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853742    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853804    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.853880    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:28:04.886788    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:28:04.886847    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:28:04.931189    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:28:04.931209    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:04.931315    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:04.947443    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:28:04.955693    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:28:04.964155    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:28:04.964197    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:28:04.972493    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.980548    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:28:04.988709    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.996856    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:28:05.005271    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:28:05.013575    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:28:05.021801    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:28:05.030285    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:28:05.037842    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:28:05.045332    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.140730    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:28:05.159555    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:05.159625    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:28:05.177222    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.189624    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:28:05.203743    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.214606    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.224836    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:28:05.249649    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.261132    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:05.276191    4789 ssh_runner.go:195] Run: which cri-dockerd
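
The two printf | tee commands above (one pointing crictl at containerd during cgroup detection, then this one pointing it at cri-dockerd) just materialize a one-line /etc/crictl.yaml. A local Go sketch of that write; the 0644 mode is an assumption, and the real step runs over SSH:

	package main

	import (
		"fmt"
		"os"
	)

	// writeCrictlConfig writes the single runtime-endpoint line that the
	// remote printf|tee pipeline above produces.
	func writeCrictlConfig(endpoint string) error {
		data := fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
		return os.WriteFile("/etc/crictl.yaml", []byte(data), 0o644)
	}

	func main() {
		if err := writeCrictlConfig("unix:///var/run/cri-dockerd.sock"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
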
	I0819 10:28:05.279129    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:28:05.287175    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:28:05.300748    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:28:05.396444    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:28:05.505778    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:28:05.505805    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:28:05.520914    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.616215    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:28:07.911303    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.295016426s)
	I0819 10:28:07.911366    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:28:07.923467    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:28:07.938312    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:07.949283    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:28:08.046922    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:28:08.152880    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.256594    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:28:08.271339    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:08.283089    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.384798    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:28:08.441813    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:28:08.441881    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
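
"Will wait 60s for socket path" above is a stat poll against /var/run/cri-dockerd.sock. A Go sketch of that wait, run locally instead of over SSH; the 500ms poll interval is an assumption:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes, mirroring
	// the socket-path wait logged above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s did not appear within %s", path, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", time.Minute))
	}
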
	I0819 10:28:08.446421    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:28:08.446473    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:28:08.449807    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:28:08.479621    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:28:08.479690    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.496571    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.537488    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:28:08.579078    4789 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:28:08.603340    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:08.603786    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:28:08.608372    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
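
The one-liner above rewrites /etc/hosts in place: grep -v drops any stale host.minikube.internal line, the fresh mapping is appended, and the result is copied back through /tmp/h.$$. The same edit in Go, sketched against a local file rather than the guest:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHostsEntry strips any line ending in "\t<name>" from the hosts file
	// and appends the fresh "ip\tname" mapping, as the shell pipeline does.
	func pinHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := pinHostsEntry("/etc/hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
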
	I0819 10:28:08.618166    4789 mustload.go:65] Loading cluster: ha-431000
	I0819 10:28:08.618314    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:08.618533    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.618549    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.627122    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
	I0819 10:28:08.627459    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.627845    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.627857    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.628097    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.628239    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:28:08.628342    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:08.628430    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:28:08.629353    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:08.629592    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.629608    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.638041    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51172
	I0819 10:28:08.638388    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.638753    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.638770    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.638992    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.639108    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:08.639209    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:28:08.639216    4789 certs.go:194] generating shared ca certs ...
	I0819 10:28:08.639225    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.639357    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:28:08.639425    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:28:08.639434    4789 certs.go:256] generating profile certs ...
	I0819 10:28:08.639538    4789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:28:08.639562    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788
	I0819 10:28:08.639575    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0819 10:28:08.693749    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 ...
	I0819 10:28:08.693766    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788: {Name:mkade16cb35e521e9e55fc42d7cb129c8b94b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694149    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 ...
	I0819 10:28:08.694160    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788: {Name:mkeae0a28d48da45f84299952289f15db5f944f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694378    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:28:08.694703    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
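
The block above mints the apiserver serving certificate with the SAN list logged at the start (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.169.0.5, 192.169.0.6, 192.169.0.254), then promotes the .2ad85788-suffixed pair to apiserver.crt/key. A standalone Go sketch of issuing such a cert with crypto/x509; the subject names and validity periods are illustrative, and a throwaway CA stands in for the profile CA loaded from ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Throwaway CA so the example runs standalone.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Serving cert carrying the SANs from the log; subject is illustrative.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"system:masters"}, CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-431000-m02", "localhost", "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"), net.ParseIP("192.169.0.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}
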
	I0819 10:28:08.694954    4789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:28:08.694964    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:28:08.694987    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:28:08.695006    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:28:08.695024    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:28:08.695042    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:28:08.695060    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:28:08.695078    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:28:08.695096    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:28:08.695175    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:28:08.695213    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:28:08.695228    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:28:08.695261    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:28:08.695290    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:28:08.695321    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:28:08.695400    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:08.695438    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:28:08.695462    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:28:08.695482    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:08.695511    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:08.695664    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:08.695745    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:08.695845    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:08.695925    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:08.729193    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:28:08.736181    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:28:08.748665    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:28:08.751826    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:28:08.773481    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:28:08.777252    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:28:08.787546    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:28:08.791015    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:28:08.800105    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:28:08.803218    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:28:08.812240    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:28:08.815351    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:28:08.824083    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:28:08.844052    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:28:08.864107    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:28:08.884612    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:28:08.904284    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 10:28:08.924397    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:28:08.944026    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:28:08.964689    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:28:08.984934    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:28:09.004413    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:28:09.024043    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:28:09.043924    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:28:09.058066    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:28:09.071585    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:28:09.085080    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:28:09.098536    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:28:09.112048    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:28:09.125242    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
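	The block above is minikube's ssh_runner pushing each local certificate and key onto the node over SCP. A minimal Go sketch of a single such transfer, shelling out to the scp CLI with the IP, user, and key path shown in the "new ssh client" log line (an approximation — minikube drives the transfer through its own SSH client rather than the CLI):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// scpTo copies a local file to the node, like the ssh_runner scp lines.
	// IP, user, and key path are taken from the "new ssh client" log line.
	func scpTo(local, remote string) error {
		key := "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa"
		return exec.Command("scp", "-i", key, local, "docker@192.169.0.5:"+remote).Run()
	}
	
	func main() {
		if err := scpTo("ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
			fmt.Println("copy failed:", err)
		}
	}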
	I0819 10:28:09.139717    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:28:09.144032    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:28:09.152602    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.155967    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.156009    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.160192    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:28:09.168568    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:28:09.176997    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180533    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180568    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.184799    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:28:09.193356    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:28:09.201811    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205453    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205494    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.209760    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
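	The openssl x509 -hash -noout calls above compute the subject-hash name OpenSSL uses to look up CA files, and the ln -fs commands publish each certificate as /etc/ssl/certs/<hash>.0 so the system trust store can find it. A short Go sketch of the same convention, assuming openssl and sudo are available on the target:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// installCA hashes a PEM certificate the way "openssl x509 -hash -noout"
	// does and links it under /etc/ssl/certs/<hash>.0, mirroring the
	// "test -L ... || ln -fs ..." commands in the log.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		return exec.Command("sudo", "ln", "-fs", pem, link).Run()
	}
	
	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("install failed:", err)
		}
	}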
	I0819 10:28:09.218392    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:28:09.222392    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:28:09.222437    4789 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:28:09.222498    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:28:09.222516    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:28:09.222559    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:28:09.234408    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:28:09.234452    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
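	The manifest above is a static pod: kube-vip advertises the HA VIP 192.169.0.254 via ARP on eth0, elects a leader through the plndr-cp-lock lease, and — because lb_enable/lb_port are set, per the "auto-enabling control-plane load-balancing" line — load-balances API traffic on 8443 across the control planes. One way to sanity-check the rendered file is to parse it back and read the VIP; a sketch using gopkg.in/yaml.v3 and the path the manifest is later copied to:
	
	package main
	
	import (
		"fmt"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	// pod models just enough of the manifest to reach the env list.
	type pod struct {
		Spec struct {
			Containers []struct {
				Env []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}
	
	func main() {
		raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			panic(err)
		}
		var p pod
		if err := yaml.Unmarshal(raw, &p); err != nil {
			panic(err)
		}
		if len(p.Spec.Containers) == 0 {
			panic("no containers in manifest")
		}
		for _, e := range p.Spec.Containers[0].Env {
			if e.Name == "address" {
				fmt.Println("control-plane VIP:", e.Value) // expect 192.169.0.254
			}
		}
	}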
	I0819 10:28:09.234506    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.242939    4789 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:28:09.242994    4789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0819 10:28:09.251336    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm
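	Each download URL carries a checksum=file:...sha256 parameter, so the binary is verified against the published digest before it is cached and copied to the node. A hedged sketch of that verify-then-write flow — minikube's download package does the real work; this version reads the whole binary into memory for brevity:
	
	package main
	
	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)
	
	// fetch returns the full response body for a URL.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}
	
	// downloadVerified fetches url, checks it against the published .sha256
	// file (the checksum=file:... parameter in the log), then writes it out.
	func downloadVerified(url, dst string) error {
		sum, err := fetch(url + ".sha256")
		if err != nil {
			return err
		}
		body, err := fetch(url)
		if err != nil {
			return err
		}
		got := sha256.Sum256(body)
		want := strings.Fields(string(sum))[0]
		if hex.EncodeToString(got[:]) != want {
			return fmt.Errorf("checksum mismatch for %s", url)
		}
		return os.WriteFile(dst, body, 0o755)
	}
	
	func main() {
		url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
		if err := downloadVerified(url, "kubelet"); err != nil {
			fmt.Println(err)
		}
	}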
	I0819 10:28:11.797289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:28:11.809069    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.809192    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.812267    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:28:11.812291    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 10:28:12.469259    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.469340    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.472845    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:28:12.472869    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:28:13.348737    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.348820    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.352429    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:28:13.352449    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:28:13.542994    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:28:13.550937    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:28:13.564187    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:28:13.577654    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:28:13.591433    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:28:13.594347    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
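	The shell one-liner above keeps the hosts entry idempotent: filter out any stale control-plane.minikube.internal line, append the VIP mapping, and copy the temp file back over /etc/hosts. The equivalent logic in Go, for clarity (must run as root to write /etc/hosts):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const host = "control-plane.minikube.internal"
		const entry = "192.169.0.254\t" + host
	
		raw, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) { // drop any stale mapping
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("hosts updated")
	}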
	I0819 10:28:13.604347    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:13.710422    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:13.730131    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:13.730407    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:13.730448    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:13.739474    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51199
	I0819 10:28:13.739816    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:13.740174    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:13.740190    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:13.740438    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:13.740564    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:13.740661    4789 start.go:317] joinCluster: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:28:13.740750    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 10:28:13.740767    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:13.740857    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:13.740939    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:13.741027    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:13.741101    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:13.815525    4789 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:13.815563    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0819 10:28:41.108330    4789 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (27.292143754s)
	I0819 10:28:41.108351    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 10:28:41.504714    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000-m02 minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=false
	I0819 10:28:41.585348    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-431000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 10:28:41.693283    4789 start.go:319] duration metric: took 27.951997328s to joinCluster
	I0819 10:28:41.693326    4789 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:41.693537    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:41.715528    4789 out.go:177] * Verifying Kubernetes components...
	I0819 10:28:41.790354    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:41.995139    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:42.017369    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:28:42.017608    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:28:42.017650    4789 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:28:42.017827    4789 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:42.017919    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.017925    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.017930    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.017935    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.025432    4789 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 10:28:42.518902    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.518917    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.518923    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.518927    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.521742    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:43.018396    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.018411    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.018417    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.018421    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.021454    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:43.518031    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.518083    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.518106    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.518116    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.522999    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:44.018193    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.018219    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.018231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.018237    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.021854    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:44.022387    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:44.518152    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.518189    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.518196    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.518199    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.520027    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.019772    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.019792    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.019799    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.019803    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.021628    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.518039    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.518053    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.518059    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.518064    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.520113    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:46.018198    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.018232    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.018239    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.018243    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.020136    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.518474    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.518490    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.518496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.518499    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.520505    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.520916    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:47.019124    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.019150    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.019162    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.022729    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:47.518316    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.518341    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.518351    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.518356    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.520471    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:48.019594    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.019620    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.019630    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.019636    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.023447    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:48.518492    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.518526    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.518583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.518593    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.523421    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:48.523787    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:49.019217    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.019242    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.019254    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.019260    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.022862    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:49.520299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.520324    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.520337    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.520342    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.523532    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.019383    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.019412    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.019424    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.019430    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.022847    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.519489    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.519503    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.519511    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.519515    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.522131    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:51.019130    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.019153    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.019163    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.022497    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:51.022894    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:51.518391    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.518448    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.518465    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.518476    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.521848    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.019014    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.019045    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.019103    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.019117    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.022339    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.519630    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.519644    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.519651    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.519655    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.522019    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.018435    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.018460    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.018472    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.018480    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.021850    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:53.518299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.518340    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.518349    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.518355    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.520795    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.521268    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:54.020380    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.020406    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.020418    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.020423    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.024178    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:54.519346    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.519364    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.519383    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.519387    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.521155    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:55.020400    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.020425    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.020437    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.020444    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.024326    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:55.519229    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.519245    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.519264    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.519268    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.521435    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:55.521852    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:56.019678    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.019703    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.019714    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.019719    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.023317    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:56.518539    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.518563    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.518576    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.518581    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.521781    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.020424    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.020449    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.020460    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.020465    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.024114    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.519399    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.519428    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.519468    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.519475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.522788    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.523223    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:58.018734    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.018759    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.018770    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.018777    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.022242    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.518348    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.518359    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.518371    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.518375    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.522907    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.523168    4789 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:28:58.523182    4789 node_ready.go:38] duration metric: took 16.504973252s for node "ha-431000-m02" to be "Ready" ...
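	The repeating ~500ms GETs above are the node_ready poll: fetch the Node object and check its NodeReady condition until it reports True. An equivalent standalone poll with client-go, using the kubeconfig path from the log and the same 6m budget:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady reports whether the NodeReady condition is True, mirroring
	// what each GET /api/v1/nodes/ha-431000-m02 in the log is checking.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19478-1622/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s"
		for time.Now().Before(deadline) {
			if ok, err := nodeReady(cs, "ha-431000-m02"); err == nil && ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the poll cadence in the log
		}
		fmt.Println("timed out waiting for Ready")
	}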
	I0819 10:28:58.523189    4789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:28:58.523237    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:28:58.523243    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.523249    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.523253    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.528083    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.532699    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.532761    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:28:58.532768    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.532774    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.532776    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.535978    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.536344    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.536351    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.536358    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.536361    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.538061    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.538368    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.538377    4789 pod_ready.go:82] duration metric: took 5.660556ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538383    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538413    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:28:58.538417    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.538423    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.538428    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.540013    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.540457    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.540465    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.540471    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.540475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.542120    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.542393    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.542400    4789 pod_ready.go:82] duration metric: took 4.011453ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542406    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542439    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:28:58.542444    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.542449    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.542454    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.543986    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.544340    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.544347    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.544353    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.544356    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.545868    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.546173    4789 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.546181    4789 pod_ready.go:82] duration metric: took 3.769725ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546187    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546221    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:28:58.546226    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.546231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.546234    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.547638    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.548110    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.548118    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.548123    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.548127    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.549514    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.549853    4789 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.549860    4789 pod_ready.go:82] duration metric: took 3.668598ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.549868    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.718822    4789 request.go:632] Waited for 168.888912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718861    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718867    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.718872    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.718876    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.721032    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:58.919673    4789 request.go:632] Waited for 198.011193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919731    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919740    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.919750    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.919807    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.923236    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.923670    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.923682    4789 pod_ready.go:82] duration metric: took 373.799986ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
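	The request.go:632 "Waited ... due to client-side throttling" lines are client-go's default token-bucket limiter (QPS 5, burst 10) pacing the paired pod+node GETs; they indicate normal rate limiting on the client, not a failure. A caller wanting to avoid the ~200ms waits would raise the limits on its rest.Config before building the clientset, e.g. (values illustrative):
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19478-1622/kubeconfig")
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go default is 5
		cfg.Burst = 100 // client-go default is 10
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client ready:", cs != nil)
	}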
	I0819 10:28:58.923691    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.119399    4789 request.go:632] Waited for 195.629207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119559    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119572    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.119583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.119589    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.122804    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.318619    4789 request.go:632] Waited for 195.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318674    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318695    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.318702    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.318705    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.320812    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.321165    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.321173    4789 pod_ready.go:82] duration metric: took 397.466691ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.321180    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.520541    4789 request.go:632] Waited for 199.292765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520642    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520652    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.520663    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.520672    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.524463    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.718728    4789 request.go:632] Waited for 192.615056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718803    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718811    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.718818    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.718823    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.720955    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.721397    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.721407    4789 pod_ready.go:82] duration metric: took 400.213219ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.721415    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.918907    4789 request.go:632] Waited for 197.434904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919004    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919014    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.919024    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.919030    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.922451    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.119192    4789 request.go:632] Waited for 196.220574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119263    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119272    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.119286    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.119297    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.122630    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.122957    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.122968    4789 pod_ready.go:82] duration metric: took 401.538458ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.122977    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.320524    4789 request.go:632] Waited for 197.475989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320660    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320672    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.320681    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.320689    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.323985    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.519403    4789 request.go:632] Waited for 194.628597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519535    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519546    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.519560    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.519568    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.523121    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.523435    4789 pod_ready.go:93] pod "kube-proxy-5h7j2" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.523449    4789 pod_ready.go:82] duration metric: took 400.456993ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.523457    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.718666    4789 request.go:632] Waited for 195.15054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718742    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718752    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.718786    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.718800    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.721920    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.918782    4789 request.go:632] Waited for 196.40919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918873    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918882    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.918896    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.918906    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.922355    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.922815    4789 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.922824    4789 pod_ready.go:82] duration metric: took 399.351509ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.922830    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.118854    4789 request.go:632] Waited for 195.977175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118950    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118965    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.118981    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.118987    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.122683    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.318886    4789 request.go:632] Waited for 195.887859ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319029    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319042    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.319053    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.319063    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.322689    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.323187    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.323200    4789 pod_ready.go:82] duration metric: took 400.355182ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.323208    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.518928    4789 request.go:632] Waited for 195.662505ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519043    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519057    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.519070    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.519077    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.522736    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.718819    4789 request.go:632] Waited for 195.65197ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718885    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718891    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.718899    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.718905    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.721246    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:29:01.721682    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.721691    4789 pod_ready.go:82] duration metric: took 398.467113ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.721701    4789 pod_ready.go:39] duration metric: took 3.198431164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
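
The block above is the pod_ready wait: for each system-critical pod the checker GETs the pod, looks for the PodReady condition, then GETs the owning node, with the client-side throttling visible as the ~195ms waits. A minimal sketch of that readiness check using client-go (an assumption for illustration; minikube's actual helpers live in pod_ready.go and the poll interval here is a guess):

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod reports PodReady=True, mirroring
// the "waiting up to 6m0s for pod ... to be Ready" loops in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(200 * time.Millisecond) // interval is illustrative
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}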
	I0819 10:29:01.721718    4789 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:29:01.721774    4789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:29:01.735634    4789 api_server.go:72] duration metric: took 20.041851081s to wait for apiserver process to appear ...
	I0819 10:29:01.735647    4789 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:29:01.735663    4789 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:29:01.738815    4789 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:29:01.738848    4789 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:29:01.738854    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.738860    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.738864    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.739526    4789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:29:01.739580    4789 api_server.go:141] control plane version: v1.31.0
	I0819 10:29:01.739589    4789 api_server.go:131] duration metric: took 3.937962ms to wait for apiserver health ...
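
The two probes above are plain HTTPS GETs: /healthz must return 200 with the literal body "ok", and /version reports the control-plane version (v1.31.0 here). A sketch of the healthz probe; skipping TLS verification is a shortcut for the sketch only, the real client trusts the cluster CA:

package health

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// checkHealthz performs the same probe as the api_server.go lines above:
// GET https://<host>/healthz and expect a 200 with body "ok".
func checkHealthz(host string) error {
	client := &http.Client{Transport: &http.Transport{
		// Sketch-only: the real check authenticates the server with the cluster CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(fmt.Sprintf("https://%s/healthz", host))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz: status %d, body %q", resp.StatusCode, body)
	}
	return nil
}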
	I0819 10:29:01.739594    4789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:29:01.918638    4789 request.go:632] Waited for 178.995687ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918733    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918745    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.918757    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.918762    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.922864    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:01.926606    4789 system_pods.go:59] 17 kube-system pods found
	I0819 10:29:01.926628    4789 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:01.926633    4789 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:01.926636    4789 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:01.926640    4789 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:01.926642    4789 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:01.926645    4789 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:01.926647    4789 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:01.926650    4789 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:01.926653    4789 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:01.926656    4789 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:01.926659    4789 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:01.926666    4789 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:01.926669    4789 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:01.926672    4789 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:01.926674    4789 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:01.926677    4789 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:01.926680    4789 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:01.926684    4789 system_pods.go:74] duration metric: took 187.080965ms to wait for pod list to return data ...
	I0819 10:29:01.926689    4789 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:29:02.119406    4789 request.go:632] Waited for 192.625822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119507    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119517    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.119528    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.119535    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.123120    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.123283    4789 default_sa.go:45] found service account: "default"
	I0819 10:29:02.123293    4789 default_sa.go:55] duration metric: took 196.595366ms for default service account to be created ...
	I0819 10:29:02.123300    4789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:29:02.319795    4789 request.go:632] Waited for 196.43255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319928    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319939    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.319947    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.319954    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.324586    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:02.328058    4789 system_pods.go:86] 17 kube-system pods found
	I0819 10:29:02.328071    4789 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:02.328075    4789 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:02.328078    4789 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:02.328083    4789 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:02.328086    4789 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:02.328088    4789 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:02.328091    4789 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:02.328093    4789 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:02.328096    4789 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:02.328098    4789 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:02.328101    4789 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:02.328103    4789 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:02.328106    4789 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:02.328109    4789 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:02.328112    4789 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:02.328115    4789 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:02.328117    4789 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:02.328122    4789 system_pods.go:126] duration metric: took 204.813151ms to wait for k8s-apps to be running ...
	I0819 10:29:02.328133    4789 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:29:02.328183    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:29:02.340002    4789 system_svc.go:56] duration metric: took 11.865981ms WaitForService to wait for kubelet
	I0819 10:29:02.340017    4789 kubeadm.go:582] duration metric: took 20.646222268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:29:02.340034    4789 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:29:02.518831    4789 request.go:632] Waited for 178.726274ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518969    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518980    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.518991    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.518998    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.522659    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.523326    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523339    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523348    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523351    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523354    4789 node_conditions.go:105] duration metric: took 183.311856ms to run NodePressure ...
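
The NodePressure step lists all nodes and reads each node's capacity, which is why the storage/cpu pair is printed twice: once per control-plane node (ha-431000 and ha-431000-m02). A client-go sketch of that read (again an assumption, not minikube's exact code):

package nodes

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacity reports the two figures the log prints for every node:
// ephemeral-storage capacity and CPU count.
func printCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}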
	I0819 10:29:02.523361    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:29:02.523378    4789 start.go:255] writing updated cluster config ...
	I0819 10:29:02.544110    4789 out.go:201] 
	I0819 10:29:02.566227    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:02.566358    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.588965    4789 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:29:02.630777    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:29:02.630803    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:29:02.630953    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:29:02.630966    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:29:02.631053    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.631767    4789 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:29:02.631849    4789 start.go:364] duration metric: took 64.609µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:29:02.631869    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:29:02.631978    4789 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0819 10:29:02.652968    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:29:02.653116    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:29:02.653158    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:29:02.663539    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0819 10:29:02.663925    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:29:02.664263    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:29:02.664277    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:29:02.664539    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:29:02.664672    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:02.664758    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:02.664867    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:29:02.664899    4789 client.go:168] LocalClient.Create starting
	I0819 10:29:02.664932    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:29:02.664992    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665005    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665051    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:29:02.665087    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665103    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665116    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:29:02.665122    4789 main.go:141] libmachine: (ha-431000-m03) Calling .PreCreateCheck
	I0819 10:29:02.665218    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.665228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:02.674109    4789 main.go:141] libmachine: Creating machine...
	I0819 10:29:02.674126    4789 main.go:141] libmachine: (ha-431000-m03) Calling .Create
	I0819 10:29:02.674302    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.674550    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.674293    4918 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:29:02.674675    4789 main.go:141] libmachine: (ha-431000-m03) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:29:02.956098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.955977    4918 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa...
	I0819 10:29:03.041212    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.041121    4918 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk...
	I0819 10:29:03.041230    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing magic tar header
	I0819 10:29:03.041239    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing SSH key tar header
	I0819 10:29:03.042098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.042003    4918 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03 ...
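
The "Writing magic tar header" / "Writing SSH key tar header" steps explain how the .rawdisk is built: the driver writes a small tar archive (carrying the freshly generated SSH key) at the front of a sparse file, and the guest's init recognizes that marker, formats the disk, and preserves the key. A sketch of the idea under those assumptions; file names and layout here are illustrative, not minikube's exact code:

package disk

import (
	"archive/tar"
	"os"
)

// createRawDisk writes a tar archive containing the SSH key at the start of
// a sparse file, then extends the file to sizeBytes. The tail stays sparse
// until the guest formats the device on first boot.
func createRawDisk(path string, key []byte, sizeBytes int64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	return f.Truncate(sizeBytes)
}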
	I0819 10:29:03.582755    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.582783    4789 main.go:141] libmachine: (ha-431000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:29:03.582846    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:29:03.618942    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:29:03.618960    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:29:03.619021    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619049    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619085    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:29:03.619116    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:29:03.619133    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:29:03.621990    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Pid is 4921
	I0819 10:29:03.622461    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:29:03.622497    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.622585    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:03.623424    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:03.623486    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:03.623500    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:03.623537    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:03.623548    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:03.623558    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:03.623568    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:03.629643    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:29:03.638725    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:29:03.639577    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:03.639599    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:03.639609    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:03.639622    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.022361    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:29:04.022375    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:29:04.137228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:04.137262    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:04.137274    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:04.137284    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.138001    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:29:04.138016    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:29:05.623879    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 1
	I0819 10:29:05.623896    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:05.624023    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:05.624809    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:05.624873    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:05.624888    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:05.624904    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:05.624917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:05.624926    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:05.624935    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:07.626679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 2
	I0819 10:29:07.626696    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:07.626779    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:07.627539    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:07.627582    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:07.627592    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:07.627610    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:07.627619    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:07.627626    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:07.627635    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.627812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 3
	I0819 10:29:09.627828    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:09.627917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:09.628679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:09.628746    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:09.628777    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:09.628791    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:09.628799    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:09.628806    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:09.628812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.722721    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:29:09.722792    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:29:09.722802    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:29:09.745848    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:29:11.630390    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 4
	I0819 10:29:11.630407    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:11.630495    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:11.631275    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:11.631321    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:11.631331    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:11.631340    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:11.631359    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:11.631366    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:11.631387    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:13.633236    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 5
	I0819 10:29:13.633251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.633339    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.634147    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:13.634209    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0819 10:29:13.634221    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:29:13.634228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:29:13.634232    4789 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
	I0819 10:29:13.634299    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:13.634943    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635064    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635157    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:29:13.635165    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:29:13.635251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.635310    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.636120    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:29:13.636129    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:29:13.636133    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:29:13.636138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:13.636228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:13.636309    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636392    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636477    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:13.636587    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:13.636755    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:13.636763    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:29:14.697546    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
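
The WaitForSSH step is simply running the no-op command "exit 0" over SSH with the generated key until it succeeds, which proves the guest's sshd is up. A sketch of one such attempt using golang.org/x/crypto/ssh (a stand-in for libmachine's SSH client; callers would retry on error):

package sshwait

import (
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runExitZero dials the guest and runs "exit 0", the same probe the log
// shows. Host key checking is disabled, as is usual for throwaway VMs.
func runExitZero(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	})
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}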
	I0819 10:29:14.697558    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:29:14.697564    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.697702    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.697798    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.697887    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.698009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.698168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.698318    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.698326    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:29:14.765778    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:29:14.765827    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:29:14.765833    4789 main.go:141] libmachine: Provisioning with buildroot...
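
Provisioner detection works by running `cat /etc/os-release` over SSH (as above) and matching the ID field against known provisioners; ID=buildroot selects the buildroot path. A minimal sketch of the parsing step (requires Go 1.20+ for strings.CutPrefix):

package provision

import "strings"

// detectID extracts the ID field from /etc/os-release content, e.g.
// "buildroot" from the output shown above.
func detectID(osRelease string) string {
	for _, line := range strings.Split(osRelease, "\n") {
		if v, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}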
	I0819 10:29:14.765839    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.765977    4789 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:29:14.765988    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.766081    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.766185    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.766270    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766369    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766481    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.766635    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.766783    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.766792    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:29:14.841753    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:29:14.841769    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.841901    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.842009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842101    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842195    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.842324    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.842477    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.842489    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:29:14.911764    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:29:14.911779    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:29:14.911793    4789 buildroot.go:174] setting up certificates
	I0819 10:29:14.911800    4789 provision.go:84] configureAuth start
	I0819 10:29:14.911807    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.911942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:14.912037    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.912110    4789 provision.go:143] copyHostCerts
	I0819 10:29:14.912141    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912187    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:29:14.912193    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912326    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:29:14.912504    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912534    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:29:14.912539    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912651    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:29:14.912808    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912854    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:29:14.912859    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912935    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:29:14.913083    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
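
The server-cert step issues a machine certificate signed by the run's CA, with the SAN set shown above (127.0.0.1, the VM IP 192.169.0.7, the hostname, localhost, minikube). A compressed crypto/x509 sketch of that issuance; the throwaway CA generated here stands in for ca.pem/ca-key.pem, and errors are elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in for the test run's CA key pair (normally loaded from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN set the log shows for ha-431000-m03.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:     []string{"ha-431000-m03", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}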
	I0819 10:29:15.064390    4789 provision.go:177] copyRemoteCerts
	I0819 10:29:15.064440    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:29:15.064455    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.064599    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.064695    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.064786    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.064886    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:15.103656    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:29:15.103727    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:29:15.123430    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:29:15.123497    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:29:15.143265    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:29:15.143333    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:29:15.162885    4789 provision.go:87] duration metric: took 251.064942ms to configureAuth
	I0819 10:29:15.162900    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:29:15.163052    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:15.163065    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:15.163221    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.163329    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.163417    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163506    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.163693    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.163824    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.163831    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:29:15.225270    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:29:15.225286    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:29:15.225356    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:29:15.225368    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.225510    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.225619    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225708    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225810    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.225948    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.226090    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.226134    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:29:15.299640    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:29:15.299658    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.299797    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.299889    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.299978    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.300067    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.300202    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.300355    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.300368    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:29:16.819930    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
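The SSH command above uses diff's exit status to make the unit install idempotent: the mv/daemon-reload/enable/restart branch runs only when the new file differs from the installed one, and also on first boot, where diff fails because the old unit does not exist yet (hence the "can't stat" message). The same idiom written out as an explicit conditional, with the file names from the log:

	# The diff-or-replace idiom from the command above, spelled out explicitly.
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	if ! sudo diff -u "$cur" "$new"; then
		sudo mv "$new" "$cur"
		sudo systemctl -f daemon-reload
		sudo systemctl -f enable docker
		sudo systemctl -f restart docker
	fi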
	
	I0819 10:29:16.819945    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:29:16.819953    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetURL
	I0819 10:29:16.820095    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:29:16.820107    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:29:16.820113    4789 client.go:171] duration metric: took 14.154897138s to LocalClient.Create
	I0819 10:29:16.820124    4789 start.go:167] duration metric: took 14.154947877s to libmachine.API.Create "ha-431000"
	I0819 10:29:16.820129    4789 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:29:16.820136    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:29:16.820145    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.820288    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:29:16.820301    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.820396    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.820494    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.820582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.820664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:16.862693    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:29:16.866416    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:29:16.866431    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:29:16.866540    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:29:16.866725    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:29:16.866732    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:29:16.866944    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:29:16.874578    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:29:16.904910    4789 start.go:296] duration metric: took 84.771069ms for postStartSetup
	I0819 10:29:16.904942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:16.905569    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.905740    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:16.906122    4789 start.go:128] duration metric: took 14.273822612s to createHost
	I0819 10:29:16.906138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.906230    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.906303    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906387    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906475    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.906573    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:16.906690    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:16.906697    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:29:16.969389    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088556.958185685
	
	I0819 10:29:16.969401    4789 fix.go:216] guest clock: 1724088556.958185685
	I0819 10:29:16.969406    4789 fix.go:229] Guest: 2024-08-19 10:29:16.958185685 -0700 PDT Remote: 2024-08-19 10:29:16.906131 -0700 PDT m=+127.499217490 (delta=52.054685ms)
	I0819 10:29:16.969416    4789 fix.go:200] guest clock delta is within tolerance: 52.054685ms
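The fix.go lines above implement the guest-clock check: minikube runs date +%s.%N over SSH, subtracts the host's wall clock at the moment the output arrives, and accepts the result when the delta stays within tolerance (52ms here). A rough standalone sketch at whole-second precision, assuming SSH access with the machine key and using the guest address from the log:

	# Rough sketch of the guest/host clock comparison (whole seconds only;
	# the logged check uses nanosecond precision and a configured tolerance).
	guest=$(ssh docker@192.169.0.7 'date +%s')
	host=$(date +%s)
	echo "guest/host clock delta: $((host - guest))s"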
	I0819 10:29:16.969419    4789 start.go:83] releasing machines lock for "ha-431000-m03", held for 14.337247496s
	I0819 10:29:16.969437    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.969573    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.992258    4789 out.go:177] * Found network options:
	I0819 10:29:17.014265    4789 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:29:17.037508    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.037542    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.037561    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038432    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038682    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038835    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:29:17.038873    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:29:17.038922    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.038957    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.039067    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:29:17.039087    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:17.039116    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039298    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039332    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039497    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039590    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039679    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039721    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:17.039809    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:29:17.074320    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:29:17.074385    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:29:17.120302    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
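The find invocation two lines up is logged with its shell escaping stripped. A runnable reconstruction of the same rename-to-.mk_disabled step, with quoting restored and the -printf reporting dropped for portability:

	# Reconstructed form of the logged find command: move any bridge/podman
	# CNI configs aside so they are no longer picked up as active configs.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
		\( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
		-exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;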
	I0819 10:29:17.120318    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.120398    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.135851    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:29:17.144402    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:29:17.152735    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.152784    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:29:17.161185    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.169599    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:29:17.177908    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.186319    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:29:17.194967    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:29:17.203702    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:29:17.212228    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:29:17.220632    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:29:17.228164    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:29:17.235717    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.329551    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
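The sed series above rewrites /etc/containerd/config.toml in place: sandbox image, restrict_oom_score_adj, SystemdCgroup = false (i.e. the cgroupfs driver), the runc v2 shim, the CNI conf_dir, and unprivileged ports, followed by the daemon-reload and restart just logged. One way to verify the cgroup-driver edit took effect, assuming the containerd 1.7-era config layout where the key sits under the runc runtime options:

	# Confirm the SystemdCgroup flip after the restart above (assumed 1.7-era
	# layout: plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options).
	sudo containerd config dump | grep -n 'SystemdCgroup'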
	I0819 10:29:17.348829    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.348909    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:29:17.363903    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.374976    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:29:17.393061    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.404238    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.414728    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:29:17.438632    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.449143    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.464536    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:29:17.467445    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:29:17.474809    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:29:17.488421    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:29:17.581504    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:29:17.684960    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.684980    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
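The 130-byte /etc/docker/daemon.json pushed above is not printed in the log; a daemon.json that selects the cgroupfs driver, which is what docker.go:574 reports configuring, typically looks like the sketch below, applied by the daemon-reload and restart that follow:

	# Sketch only: the log does not show the 130-byte payload; this is a
	# typical daemon.json selecting the cgroupfs cgroup driver.
	sudo tee /etc/docker/daemon.json <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF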
	I0819 10:29:17.699658    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.803979    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:30:18.773891    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.968555005s)
	I0819 10:30:18.774012    4789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:30:18.808676    4789 out.go:201] 
	W0819 10:30:18.829152    4789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:29:15 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570013158Z" level=info msg="Starting up"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570447745Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.572542412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.584880924Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603137975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603181724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603219390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603233227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603303033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603338653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603471354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603509282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603521199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603528665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603591360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603811486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605351283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605389063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605504861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605538594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605610859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605677674Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607907354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607976584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607991948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608010711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608023403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608093276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608724366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608874333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608913351Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608929178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608943960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608968346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609006571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609021660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609032833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609044499Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609055485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609066063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609088279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609103865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609115537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609130257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609139734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609151164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609161605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609173829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609200246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609211000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609224200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609237871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609251525Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609296616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609316285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609327369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609362155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609478815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609512436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609530768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609541857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609553085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609563545Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610591556Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610680787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610769049Z" level=info msg="containerd successfully booted in 0.026402s"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.601341697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.606766805Z" level=info msg="Loading containers: start."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.688780306Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.769433920Z" level=info msg="Loading containers: done."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776749571Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776865122Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.804822251Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.805010917Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:29:16 ha-431000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.814047535Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:29:17 ha-431000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815466623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815881336Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815956644Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.816022765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:18 ha-431000-m03 dockerd[921]: time="2024-08-19T17:29:18.853267859Z" level=info msg="Starting up"
	Aug 19 17:30:18 ha-431000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0819 10:30:18.829235    4789 out.go:270] * 
	W0819 10:30:18.830413    4789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:30:18.888275    4789 out.go:201] 
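The journal above pins down the failure: dockerd's second start (pid 921) spent the full 60-second dial timeout waiting on /run/containerd/containerd.sock and exited, which is what surfaces as RUNTIME_ENABLE after the 1m0.97s systemctl restart docker. A first-pass triage on the guest, assuming SSH access to it, would be:

	# Hedged triage for the containerd-socket dial timeout reported above:
	# is system containerd up, does its socket exist, and what do its logs say?
	sudo systemctl status containerd --no-pager
	ls -l /run/containerd/containerd.sock
	sudo journalctl --no-pager -u containerd | tail -n 20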
	
	
	==> Docker <==
	Aug 19 17:28:07 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3745c7f8fb9ffda1a9528dbab0743afd132acd46a2634643d4b5a24035dc2e4/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/868ee98671e833d733f787480bd37f293c8c6eb8b4092a75c7b96c7993f5f451/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/74fd2f09b011aa0f318ae4259efd3f3d52dc61d0bd78f032481d1a46763eeaae/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.132794637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133043856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133186443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133435141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139175494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139344496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139355701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139421519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157876304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157962624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157975535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.158198941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621287999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621447365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621465217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621560978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6d38fc70c811c9647892071fd07ef2e6455806b20e204cd6583df80c81ba64b7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 19 17:30:23 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040175789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040258993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040272849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040810082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da6e4a61b6cf8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   6d38fc70c811c       busybox-7dff88458-x7m6m
	b9d1bccf00c94       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	e7cacf032435f       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   868ee98671e83       storage-provisioner
	a3891ab602da5       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              14 minutes ago      Running             kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                         14 minutes ago      Running             kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	ed733554ed160       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     14 minutes ago      Running             kube-vip                  0                   90ec229d87c2c       kube-vip-ha-431000
	11d9cd3b2f49f       1766f54c897f0                                                                                         14 minutes ago      Running             kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	262471364c991       604f5db92eaa8                                                                                         14 minutes ago      Running             kube-apiserver            0                   5a0fe916eaf1d       kube-apiserver-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                         14 minutes ago      Running             etcd                      0                   fc30d54d1b565       etcd-ha-431000
	2801f8f44773b       045733566833c                                                                                         14 minutes ago      Running             kube-controller-manager   0                   80d21805f230b       kube-controller-manager-ha-431000
	
	
	==> coredns [a3891ab602da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40841 - 35632 "HINFO IN 8043641794425982319.4992720317295253252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008506209s
	[INFO] 10.244.1.2:51889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132717s
	[INFO] 10.244.1.2:37985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001601417s
	[INFO] 10.244.1.2:55682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.007910651s
	[INFO] 10.244.0.4:38616 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.000569215s
	[INFO] 10.244.0.4:47772 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000054313s
	[INFO] 10.244.1.2:49768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135774s
	[INFO] 10.244.1.2:55729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00095124s
	[INFO] 10.244.1.2:38602 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089444s
	[INFO] 10.244.1.2:52875 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099022s
	[INFO] 10.244.1.2:49308 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063848s
	[INFO] 10.244.0.4:57863 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000064923s
	[INFO] 10.244.0.4:40409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096347s
	[INFO] 10.244.1.2:34617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084305s
	[INFO] 10.244.1.2:55843 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058734s
	[INFO] 10.244.0.4:43213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096675s
	[INFO] 10.244.0.4:44050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031036s
	
	
	==> coredns [b9d1bccf00c9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54195 - 29045 "HINFO IN 6513715404119561949.1799819676960271336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007921235s
	[INFO] 10.244.1.2:45210 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.055498798s
	[INFO] 10.244.0.4:53730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111076s
	[INFO] 10.244.0.4:51704 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000411643s
	[INFO] 10.244.1.2:54559 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088744s
	[INFO] 10.244.1.2:58642 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064137s
	[INFO] 10.244.1.2:34281 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000845538s
	[INFO] 10.244.0.4:53439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000058375s
	[INFO] 10.244.0.4:33951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106207s
	[INFO] 10.244.0.4:38202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000034691s
	[INFO] 10.244.0.4:46478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119286s
	[INFO] 10.244.0.4:53704 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053613s
	[INFO] 10.244.0.4:42766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051163s
	[INFO] 10.244.1.2:44413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116167s
	[INFO] 10.244.1.2:58453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067066s
	[INFO] 10.244.0.4:37472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063597s
	[INFO] 10.244.0.4:59559 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033396s
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:41:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:28:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7b5b85e2c64405f969f3e24eb671b2e
	  System UUID:                7f844fbb-0000-0000-b5d6-699bdfe1640c
	  Boot ID:                    cb211998-dc9c-4fd5-a169-3f6eeb2403fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x7m6m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-hr2qx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-vc76p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-lvdbg                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5l56s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-431000 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	
	
	Name:               ha-431000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:41:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-431000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 21fb6f298fbf435c88fd6e9f9b50e04f
	  System UUID:                decf4e23-0000-0000-95db-084dbcc69753
	  Boot ID:                    330a7904-5229-4d07-9792-de118102386c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2l9lq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-431000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-qmgqd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-431000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-431000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-5h7j2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-431000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-431000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	
	
	==> dmesg <==
	[  +2.712596] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230971] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.519395] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.106046] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.754357] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.260100] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.108326] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.116397] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.050322] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.370658] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.100232] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.114416] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.133019] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.706453] systemd-fstab-generator[1261]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.542020] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +4.524199] systemd-fstab-generator[1651]: Ignoring "noauto" option for root device
	[  +0.058523] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.145787] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
	[  +0.090131] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.001426] kauditd_printk_skb: 35 callbacks suppressed
	[Aug19 17:28] kauditd_printk_skb: 15 callbacks suppressed
	[ +36.695422] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [39fe08877284] <==
	{"level":"info","ts":"2024-08-19T17:28:39.576807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860) learners=(13991592590719088728)"}
	{"level":"info","ts":"2024-08-19T17:28:39.576958Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"c22c1f54a3cc7858","added-peer-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-19T17:28:39.577171Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577230Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577486Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577607Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577632Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-19T17:28:39.577678Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577764Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577976Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.578023Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582369Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T17:28:40.582407Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582418Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.596476Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.597370Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T17:28:40.597585Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.605913Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:41.107824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860 13991592590719088728)"}
	{"level":"info","ts":"2024-08-19T17:28:41.107895Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-19T17:28:41.107911Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:32:31.484329Z","caller":"traceutil/trace.go:171","msg":"trace[1768622606] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"105.97642ms","start":"2024-08-19T17:32:31.378330Z","end":"2024-08-19T17:32:31.484306Z","steps":["trace[1768622606] 'process raft request'  (duration: 69.010204ms)","trace[1768622606] 'compare'  (duration: 36.887791ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:37:40.726136Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1233}
	{"level":"info","ts":"2024-08-19T17:37:40.747676Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1233,"took":"20.998439ms","hash":1199177849,"current-db-size-bytes":3051520,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-19T17:37:40.747929Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1199177849,"revision":1233,"compact-revision":-1}
	
	
	==> kernel <==
	 17:42:01 up 14 min,  0 users,  load average: 0.04, 0.13, 0.09
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:40:53.913656       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:03.913585       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:03.913669       1 main.go:299] handling current node
	I0819 17:41:03.913691       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:03.913704       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:13.921879       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:13.922055       1 main.go:299] handling current node
	I0819 17:41:13.922135       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:13.922216       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:23.922941       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:23.923249       1 main.go:299] handling current node
	I0819 17:41:23.923348       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:23.923383       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:33.918589       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:33.918730       1 main.go:299] handling current node
	I0819 17:41:33.918774       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:33.918810       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:43.921725       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:43.921764       1 main.go:299] handling current node
	I0819 17:41:43.921776       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:43.921781       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:53.913960       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:53.914062       1 main.go:299] handling current node
	I0819 17:41:53.914082       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:53.914091       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [262471364c99] <==
	I0819 17:27:41.910619       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:27:41.911091       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 17:27:41.912463       1 controller.go:615] quota admission added evaluator for: namespaces
	I0819 17:27:41.974665       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 17:27:42.843862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 17:27:42.851035       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 17:27:42.851176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:27:43.131229       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 17:27:43.156609       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 17:27:43.228677       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 17:27:43.232630       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0819 17:27:43.233263       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:27:43.235521       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 17:27:43.816793       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 17:27:45.642805       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 17:27:45.648554       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 17:27:45.656204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 17:27:49.372173       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 17:27:49.521616       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 17:41:58.471372       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51273: use of closed network connection
	E0819 17:41:58.792809       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51278: use of closed network connection
	E0819 17:41:58.976708       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51280: use of closed network connection
	E0819 17:41:59.288867       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51285: use of closed network connection
	E0819 17:41:59.474614       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51287: use of closed network connection
	E0819 17:41:59.785950       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51292: use of closed network connection
	
	
	==> kube-controller-manager [2801f8f44773] <==
	I0819 17:28:46.812463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:46.910622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:49.488441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:58.619481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:58.630217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:29:01.828992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:29:09.962018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:30:22.272615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.179354ms"
	I0819 17:30:22.288765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.8458ms"
	I0819 17:30:22.344803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="55.991929ms"
	I0819 17:30:22.374182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.188136ms"
	I0819 17:30:22.381153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.695075ms"
	I0819 17:30:22.382352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.585µs"
	I0819 17:30:22.399951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.69495ms"
	I0819 17:30:22.400210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="24.929µs"
	I0819 17:30:24.244155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.898617ms"
	I0819 17:30:24.244396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.117µs"
	I0819 17:30:24.566063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.881458ms"
	I0819 17:30:24.566244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.693µs"
	I0819 17:30:41.556624       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:30:49.673928       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	I0819 17:35:47.271228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:35:55.416754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	I0819 17:40:53.216070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:41:01.735584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	
	
	==> kube-proxy [889ab608901b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:27:50.162614       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:27:50.171417       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0819 17:27:50.171450       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:27:50.239161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:27:50.239202       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:27:50.239220       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:27:50.242102       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:27:50.242306       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:27:50.242335       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:27:50.253458       1 config.go:197] "Starting service config controller"
	I0819 17:27:50.253497       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:27:50.253518       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:27:50.253542       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:27:50.253889       1 config.go:326] "Starting node config controller"
	I0819 17:27:50.253915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:27:50.354735       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:27:50.354788       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:27:50.354817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	W0819 17:27:41.846154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:27:41.846286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:41.846418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:41.846569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.722533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:27:42.722591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.808762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 17:27:42.808891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.853276       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:27:42.853353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.858509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:27:42.858619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.867998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:27:42.868077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.900445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:27:42.900541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.970545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:27:42.970765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:43.004003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:43.004103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:27:43.339820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:30:22.272037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:30:22.273195       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e37fe27d-f1bf-427d-a76d-96722b0c74a1(default/busybox-7dff88458-x7m6m) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x7m6m"
	E0819 17:30:22.273433       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" pod="default/busybox-7dff88458-x7m6m"
	I0819 17:30:22.273582       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	
	
	==> kubelet <==
	Aug 19 17:37:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:37:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:37:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:37:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:38:45 ha-431000 kubelet[2153]: E0819 17:38:45.527347    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:38:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:39:45 ha-431000 kubelet[2153]: E0819 17:39:45.526214    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:39:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:40:45 ha-431000 kubelet[2153]: E0819 17:40:45.529172    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:40:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:41:45 ha-431000 kubelet[2153]: E0819 17:41:45.526920    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:41:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:41:59 ha-431000 kubelet[2153]: E0819 17:41:59.290192    2153 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49834->127.0.0.1:35619: write tcp 127.0.0.1:49834->127.0.0.1:35619: write: broken pipe
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-wfcpq
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-431000 describe pod busybox-7dff88458-wfcpq
helpers_test.go:282: (dbg) kubectl --context ha-431000 describe pod busybox-7dff88458-wfcpq:

-- stdout --
	Name:             busybox-7dff88458-wfcpq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t489x (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t489x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  11m                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  78s (x2 over 6m18s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  76s (x3 over 11m)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (700.53s)

TestMultiControlPlane/serial/PingHostFromPods (3.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-2l9lq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-2l9lq -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-wfcpq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-wfcpq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (122.658327ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-wfcpq does not have a host assigned

** /stderr **
ha_test.go:209: Pod busybox-7dff88458-wfcpq could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-x7m6m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-7dff88458-x7m6m -- sh -c "ping -c 1 192.169.0.1"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.083681345s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT |                     |
	|         | busybox-7dff88458-wfcpq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:27:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:27:09.441458    4789 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:27:09.441716    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441721    4789 out.go:358] Setting ErrFile to fd 2...
	I0819 10:27:09.441725    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441914    4789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:27:09.443405    4789 out.go:352] Setting JSON to false
	I0819 10:27:09.468451    4789 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3399,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:27:09.468547    4789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:27:09.554597    4789 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:27:09.577770    4789 notify.go:220] Checking for updates...
	I0819 10:27:09.609734    4789 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:27:09.676944    4789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:09.699980    4789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:27:09.722951    4789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:27:09.744804    4789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.765726    4789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:27:09.787204    4789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:27:09.817679    4789 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 10:27:09.859821    4789 start.go:297] selected driver: hyperkit
	I0819 10:27:09.859849    4789 start.go:901] validating driver "hyperkit" against <nil>
	I0819 10:27:09.859893    4789 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:27:09.864287    4789 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.864395    4789 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:27:09.872759    4789 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:27:09.876743    4789 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.876768    4789 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:27:09.876803    4789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:27:09.877011    4789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:27:09.877072    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:09.877082    4789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 10:27:09.877094    4789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:27:09.877164    4789 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:09.877251    4789 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.919755    4789 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:27:09.940604    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:09.940675    4789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:27:09.940720    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:09.940918    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:09.940931    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:09.941271    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:09.941299    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json: {Name:mkf9dcbb24d8b9fbe62d81f81a7a87fec457d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:09.941835    4789 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:09.941963    4789 start.go:364] duration metric: took 95.166µs to acquireMachinesLock for "ha-431000"
	I0819 10:27:09.941997    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:09.942082    4789 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 10:27:09.963791    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:09.964075    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.964148    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:09.974068    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51111
	I0819 10:27:09.974512    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:09.974919    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:09.974932    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:09.975172    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:09.975283    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:09.975374    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:09.975471    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:09.975492    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:09.975527    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:09.975578    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975594    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975657    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:09.975695    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975707    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975719    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:09.975729    4789 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0819 10:27:09.975800    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.975970    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:09.976388    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:09.976397    4789 main.go:141] libmachine: (ha-431000) Calling .Create
	I0819 10:27:09.976462    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.976580    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:09.976459    4799 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.976633    4789 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:10.160305    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.160220    4799 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa...
	I0819 10:27:10.258779    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.258678    4799 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0819 10:27:10.258792    4789 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0819 10:27:10.258800    4789 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0819 10:27:10.259681    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.259588    4799 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000 ...
	I0819 10:27:10.634434    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.634476    4789 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:27:10.634529    4789 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:27:10.744945    4789 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:27:10.744966    4789 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:10.744993    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745030    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745065    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:10.745094    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:10.745118    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:10.748020    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Pid is 4802
	I0819 10:27:10.748404    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:27:10.748413    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.748494    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:10.749357    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:10.749398    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:10.749412    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:10.749423    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:10.749431    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:10.755634    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:10.806699    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:10.807300    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:10.807314    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:10.807322    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:10.807335    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.184562    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:11.184575    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:11.299194    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:11.299213    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:11.299228    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:11.299236    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.300075    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:11.300086    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:12.750038    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0819 10:27:12.750054    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:12.750189    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:12.750969    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:12.751019    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:12.751030    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:12.751039    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:12.751052    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:14.752158    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0819 10:27:14.752174    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:14.752264    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:14.753040    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:14.753090    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:14.753102    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:14.753111    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:14.753117    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.754325    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0819 10:27:16.754340    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:16.754402    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:16.755326    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:16.755347    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:16.755354    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:16.755373    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:16.755390    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.856153    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:27:16.856252    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:27:16.856262    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:27:16.880804    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:27:18.757489    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0819 10:27:18.757504    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:18.757601    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:18.758394    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:18.758435    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:18.758449    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:18.758481    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:18.758495    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:20.758927    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0819 10:27:20.758946    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.759035    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.759848    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:20.759873    4789 main.go:141] libmachine: (ha-431000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:20.759888    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:20.759901    4789 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:27:20.759913    4789 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
	I0819 10:27:20.759952    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:20.760523    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760634    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760741    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:27:20.760753    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:20.760839    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.760885    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.761678    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:27:20.761690    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:27:20.761696    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:27:20.761702    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:20.761795    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:20.761883    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.761969    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.762060    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:20.762168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:20.762361    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:20.762369    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:27:21.818394    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:27:21.818406    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:27:21.818419    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.818554    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.818654    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818747    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818841    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.818981    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.819131    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.819139    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:27:21.870784    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:27:21.870826    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:27:21.870831    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:27:21.870837    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.870976    4789 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:27:21.870986    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.871077    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.871169    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.871272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871352    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871452    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.871577    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.871711    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.871719    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:27:21.937676    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:27:21.937694    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.937826    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.937927    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938017    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938112    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.938245    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.938391    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.938402    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:27:21.996654    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:27:21.996676    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:27:21.996692    4789 buildroot.go:174] setting up certificates
	I0819 10:27:21.996701    4789 provision.go:84] configureAuth start
	I0819 10:27:21.996714    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.996873    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:21.996990    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.997094    4789 provision.go:143] copyHostCerts
	I0819 10:27:21.997133    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997201    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:27:21.997209    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997337    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:27:21.997534    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997567    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:27:21.997572    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997714    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:27:21.997882    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.997926    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:27:21.997941    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.998049    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:27:21.998203    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:27:22.044837    4789 provision.go:177] copyRemoteCerts
	I0819 10:27:22.044896    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:27:22.044908    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.045021    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.045107    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.045191    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.045288    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:22.078701    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:27:22.078779    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:27:22.098027    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:27:22.098092    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:27:22.117169    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:27:22.117235    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:27:22.137411    4789 provision.go:87] duration metric: took 140.68689ms to configureAuth
	I0819 10:27:22.137424    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:27:22.137558    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:22.137574    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:22.137700    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.137783    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.137859    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.137942    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.138028    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.138134    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.138266    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.138274    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:27:22.191384    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:27:22.191397    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:27:22.191469    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:27:22.191481    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.191636    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.191724    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191834    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191924    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.192051    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.192193    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.192236    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:27:22.256138    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:27:22.256165    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.256301    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.256391    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256475    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256578    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.256695    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.256839    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.256851    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:27:23.816844    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:27:23.816860    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:27:23.816871    4789 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0819 10:27:23.817008    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:27:23.817016    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:27:23.817020    4789 client.go:171] duration metric: took 13.841219093s to LocalClient.Create
	I0819 10:27:23.817036    4789 start.go:167] duration metric: took 13.84126124s to libmachine.API.Create "ha-431000"
	I0819 10:27:23.817044    4789 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:27:23.817051    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:27:23.817063    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.817219    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:27:23.817232    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.817321    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.817402    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.817497    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.817595    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.852993    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:27:23.857771    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:27:23.857792    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:27:23.857909    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:27:23.858094    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:27:23.858100    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:27:23.858323    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:27:23.868639    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:23.894485    4789 start.go:296] duration metric: took 77.430316ms for postStartSetup
	I0819 10:27:23.894509    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:23.895099    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.895256    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:23.895585    4789 start.go:128] duration metric: took 13.953185373s to createHost
	I0819 10:27:23.895598    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.895691    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.895790    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895879    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895966    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.896069    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:23.896228    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:23.896236    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:27:23.956133    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088443.744394113
	
	I0819 10:27:23.956145    4789 fix.go:216] guest clock: 1724088443.744394113
	I0819 10:27:23.956151    4789 fix.go:229] Guest: 2024-08-19 10:27:23.744394113 -0700 PDT Remote: 2024-08-19 10:27:23.895593 -0700 PDT m=+14.491162031 (delta=-151.198887ms)
	I0819 10:27:23.956169    4789 fix.go:200] guest clock delta is within tolerance: -151.198887ms
	I0819 10:27:23.956173    4789 start.go:83] releasing machines lock for "ha-431000", held for 14.013893151s
	I0819 10:27:23.956192    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956322    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.956416    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956749    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956860    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956951    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:27:23.956980    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957023    4789 ssh_runner.go:195] Run: cat /version.json
	I0819 10:27:23.957036    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957073    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957109    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957170    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957184    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957292    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957350    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.957384    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:24.032926    4789 ssh_runner.go:195] Run: systemctl --version
	I0819 10:27:24.037723    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:27:24.041939    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:27:24.041985    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:27:24.055424    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:27:24.055435    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.055529    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.070257    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:27:24.079169    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:27:24.088264    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.088319    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:27:24.097172    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.105902    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:27:24.114585    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.123406    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:27:24.132626    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:27:24.141378    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:27:24.150490    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:27:24.158980    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:27:24.167068    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:27:24.175030    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.269460    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:27:24.289328    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.289405    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:27:24.304907    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.317291    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:27:24.330289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.340851    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.351456    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:27:24.376914    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.387402    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.402522    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:27:24.405426    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:27:24.412799    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:27:24.426019    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:27:24.528550    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:27:24.636829    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.636893    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:27:24.652027    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.753641    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:27.037286    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.283575266s)
	I0819 10:27:27.037346    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:27:27.047775    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:27:27.062961    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.074027    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:27:27.172330    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:27:27.284593    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.395779    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:27:27.409552    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.420868    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.532356    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:27:27.591558    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:27:27.591636    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:27:27.595967    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:27:27.596013    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:27:27.599275    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:27:27.625101    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:27:27.625173    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.642636    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.693299    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:27:27.693355    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:27.693783    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:27:27.698129    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:27.708916    4789 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:27:27.708982    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:27.709038    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:27.721971    4789 docker.go:685] Got preloaded images: 
	I0819 10:27:27.721984    4789 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0819 10:27:27.722034    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:27.730353    4789 ssh_runner.go:195] Run: which lz4
	I0819 10:27:27.733218    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 10:27:27.733323    4789 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:27:27.736425    4789 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:27:27.736445    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0819 10:27:28.750864    4789 docker.go:649] duration metric: took 1.017557348s to copy over tarball
	I0819 10:27:28.750956    4789 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:27:31.074672    4789 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323648699s)
	I0819 10:27:31.074688    4789 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:27:31.100633    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:31.109680    4789 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0819 10:27:31.123335    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:31.234501    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:33.578614    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344043512s)
	I0819 10:27:33.578701    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:33.592021    4789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 10:27:33.592040    4789 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:27:33.592048    4789 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:27:33.592132    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:27:33.592198    4789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:27:33.629283    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:33.629295    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:33.629309    4789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:27:33.629329    4789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:27:33.629424    4789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 10:27:33.629439    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:27:33.629491    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:27:33.642904    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:27:33.642969    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
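	The manifest above runs kube-vip in ARP mode (vip_arp=true) with control-plane load-balancing enabled, and leader election on the plndr-cp-lock lease decides which node answers for the VIP 192.169.0.254 on eth0. Once the cluster is up, the current holder can be read back with plain kubectl (lease name taken from the config above):

	    # Which control-plane node currently owns the VIP:
	    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'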
	I0819 10:27:33.643018    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:27:33.652008    4789 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:27:33.652070    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:27:33.660066    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:27:33.673571    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:27:33.686700    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:27:33.700085    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0819 10:27:33.713804    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:27:33.716661    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
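	That one-liner is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal line, append the current VIP, and swap the result in via a PID-named temp file. The same idiom unrolled for readability (equivalent commands, only reformatted):

	    {
	      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts    # drop any previous entry
	      echo $'192.169.0.254\tcontrol-plane.minikube.internal'      # append the current VIP
	    } > /tmp/h.$$                                                 # $$ is the shell PID, so the temp name is unique
	    sudo cp /tmp/h.$$ /etc/hosts                                  # replace the file in one step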
	I0819 10:27:33.726684    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:33.822205    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:27:33.836833    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:27:33.836844    4789 certs.go:194] generating shared ca certs ...
	I0819 10:27:33.836855    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.837051    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:27:33.837132    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:27:33.837142    4789 certs.go:256] generating profile certs ...
	I0819 10:27:33.837189    4789 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:27:33.837203    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0819 10:27:33.888319    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt ...
	I0819 10:27:33.888333    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt: {Name:mk2ecc34873277fbe11bf267ec0d97684e18e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888666    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key ...
	I0819 10:27:33.888675    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key: {Name:mk51abee214c838f4621902241303fe73ba93aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888900    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e
	I0819 10:27:33.888915    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0819 10:27:34.060027    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e ...
	I0819 10:27:34.060046    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e: {Name:mk108eb9cf88ab2aae15883e4a3724751adb3118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060347    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e ...
	I0819 10:27:34.060356    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e: {Name:mk8fae11cce9c9a45d3e151953d1ee9ab2cc82d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060557    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:27:34.060759    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
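	The apiserver serving certificate generated here carries every address a client might dial: the service IPs (10.96.0.1, 10.0.0.1), loopback, the node IP (192.169.0.5) and the HA VIP (192.169.0.254). Inside the guest the SANs can be read back with stock openssl (path per the certificatesDir used throughout this log):

	    # List the SANs baked into the apiserver cert:
	    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'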
	I0819 10:27:34.060929    4789 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:27:34.060943    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0819 10:27:34.243675    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0819 10:27:34.243690    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mkeb1eac7ee8b3901067565b7ff883710f2d1088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244061    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key ...
	I0819 10:27:34.244069    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key: {Name:mkc1afcd7a6a9a572716155e33c32e7def81650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244312    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:27:34.244340    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:27:34.244378    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:27:34.244398    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:27:34.244416    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:27:34.244448    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:27:34.244486    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:27:34.244521    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:27:34.244615    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:27:34.244666    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:27:34.244675    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:27:34.244748    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:27:34.244776    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:27:34.244831    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:27:34.244909    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:34.244942    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.244990    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.245007    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.245522    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:27:34.267677    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:27:34.287348    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:27:34.309971    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:27:34.330910    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 10:27:34.350036    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:27:34.370663    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:27:34.390457    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:27:34.410226    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:27:34.431025    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:27:34.451232    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:27:34.471133    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:27:34.487758    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:27:34.493769    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:27:34.506308    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511941    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511996    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.519851    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:27:34.531120    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:27:34.540803    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544302    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544341    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.548724    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:27:34.558817    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:27:34.568088    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571692    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571731    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.575999    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
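	The three-step pattern repeated above (ls, openssl x509 -hash, conditional ln -fs) is the OpenSSL hashed-directory convention: the subject-name hash names the symlink under /etc/ssl/certs so TLS stacks can locate a CA by subject. For instance, minikubeCA.pem hashes to b5213941, which is exactly the b5213941.0 link created on the last line:

	    # Print the subject hash that names the symlink in /etc/ssl/certs:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # -> b5213941   (hence /etc/ssl/certs/b5213941.0)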
	I0819 10:27:34.585057    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:27:34.588207    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:27:34.588251    4789 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:34.588345    4789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:27:34.601241    4789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:27:34.609838    4789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
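	With the rendered config now in place at /var/tmp/minikube/kubeadm.yaml, it can also be sanity-checked offline before init runs; a sketch, assuming a kubeadm new enough (v1.26+) to ship the validate subcommand:

	    # Validate the generated kubeadm config without touching the cluster:
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml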
	I0819 10:27:34.618794    4789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:27:34.627200    4789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:27:34.627208    4789 kubeadm.go:157] found existing configuration files:
	
	I0819 10:27:34.627243    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:27:34.635162    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:27:34.635198    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:27:34.643336    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:27:34.651247    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:27:34.651280    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:27:34.659346    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.667240    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:27:34.667281    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.675386    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:27:34.684053    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:27:34.684105    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 10:27:34.692357    4789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:27:34.751991    4789 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:27:34.752160    4789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:27:34.833970    4789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:27:34.834062    4789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:27:34.834153    4789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:27:34.842513    4789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:27:34.863067    4789 out.go:235]   - Generating certificates and keys ...
	I0819 10:27:34.863126    4789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:27:34.863179    4789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:27:35.003012    4789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:27:35.766829    4789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:27:35.976153    4789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:27:36.134850    4789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:27:36.228947    4789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:27:36.229166    4789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.375842    4789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:27:36.375934    4789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.597289    4789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:27:36.907219    4789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:27:37.426404    4789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:27:37.426585    4789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:27:37.566387    4789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:27:38.000620    4789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:27:38.121335    4789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:27:38.179042    4789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:27:38.231270    4789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:27:38.231752    4789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:27:38.233818    4789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:27:38.255454    4789 out.go:235]   - Booting up control plane ...
	I0819 10:27:38.255535    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:27:38.255605    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:27:38.255655    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:27:38.255734    4789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:27:38.255809    4789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:27:38.255842    4789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:27:38.364951    4789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:27:38.365069    4789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:27:39.366309    4789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001984632s
	I0819 10:27:39.366388    4789 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:27:45.029099    4789 kubeadm.go:310] [api-check] The API server is healthy after 5.666724975s
	I0819 10:27:45.039440    4789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:27:45.046481    4789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:27:45.059797    4789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:27:45.059959    4789 kubeadm.go:310] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:27:45.067482    4789 kubeadm.go:310] [bootstrap-token] Using token: rrr6yu.ivgebthw63l7ehzv
	I0819 10:27:45.106820    4789 out.go:235]   - Configuring RBAC rules ...
	I0819 10:27:45.107004    4789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:27:45.110638    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:27:45.151902    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:27:45.154406    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:27:45.156223    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:27:45.158190    4789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:27:45.434935    4789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:27:45.846068    4789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:27:46.434136    4789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:27:46.434675    4789 kubeadm.go:310] 
	I0819 10:27:46.434724    4789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:27:46.434728    4789 kubeadm.go:310] 
	I0819 10:27:46.434798    4789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:27:46.434808    4789 kubeadm.go:310] 
	I0819 10:27:46.434829    4789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:27:46.434881    4789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:27:46.434925    4789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:27:46.434930    4789 kubeadm.go:310] 
	I0819 10:27:46.434974    4789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:27:46.434984    4789 kubeadm.go:310] 
	I0819 10:27:46.435035    4789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:27:46.435041    4789 kubeadm.go:310] 
	I0819 10:27:46.435080    4789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:27:46.435139    4789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:27:46.435197    4789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:27:46.435204    4789 kubeadm.go:310] 
	I0819 10:27:46.435268    4789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:27:46.435333    4789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:27:46.435337    4789 kubeadm.go:310] 
	I0819 10:27:46.435410    4789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435498    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd \
	I0819 10:27:46.435515    4789 kubeadm.go:310] 	--control-plane 
	I0819 10:27:46.435520    4789 kubeadm.go:310] 
	I0819 10:27:46.435589    4789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:27:46.435594    4789 kubeadm.go:310] 
	I0819 10:27:46.435664    4789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435746    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd 
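	The --discovery-token-ca-cert-hash printed in both join commands pins the cluster CA for joining nodes. It can be recomputed from the CA certificate with the standard recipe from the kubeadm docs, adjusted here to minikube's certificatesDir (/var/lib/minikube/certs per the ClusterConfiguration above):

	    # Recompute the sha256 discovery hash of the cluster CA public key:
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl pkey -pubin -outform der \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'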
	I0819 10:27:46.435997    4789 kubeadm.go:310] W0819 17:27:34.545490    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436229    4789 kubeadm.go:310] W0819 17:27:34.546600    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436316    4789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
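	Both deprecation warnings point at the same remedy, which the message itself names; a sketch of that migration (the output path here is illustrative only):

	    # Rewrite the v1beta3 config using the newer API version kubeadm suggests:
	    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm-migrated.yaml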
	I0819 10:27:46.436331    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:46.436337    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:46.458203    4789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:27:46.517773    4789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:27:46.523858    4789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:27:46.523872    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:27:46.539513    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 10:27:46.759807    4789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:27:46.759878    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:46.759883    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0819 10:27:46.777623    4789 ops.go:34] apiserver oom_adj: -16
	I0819 10:27:46.926523    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.427175    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.927281    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.428033    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.926686    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.426608    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.926666    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:50.010199    4789 kubeadm.go:1113] duration metric: took 3.25030545s to wait for elevateKubeSystemPrivileges
	I0819 10:27:50.010216    4789 kubeadm.go:394] duration metric: took 15.42163041s to StartCluster
	I0819 10:27:50.010227    4789 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.010325    4789 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.010762    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.011021    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:27:50.011033    4789 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.011050    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:27:50.011076    4789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:27:50.011116    4789 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0819 10:27:50.011120    4789 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0819 10:27:50.011148    4789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0819 10:27:50.011152    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.011155    4789 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0819 10:27:50.011186    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.011415    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011420    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011430    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.011431    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.020667    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
	I0819 10:27:50.021171    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021230    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
	I0819 10:27:50.021523    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021533    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.021634    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021753    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.021940    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.022115    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.022146    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.022229    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.022806    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.022988    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.023051    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.024924    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.025156    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:27:50.025529    4789 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:27:50.025699    4789 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0819 10:27:50.025720    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.025937    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.025963    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.031229    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51138
	I0819 10:27:50.031604    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.031942    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.031953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.032154    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.032270    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.032358    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.032435    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.033436    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.034958    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51140
	I0819 10:27:50.035269    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.035586    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.035596    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.035796    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.036148    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.036165    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.044937    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51142
	I0819 10:27:50.045312    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.045667    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.045680    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.045893    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.045996    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.046077    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.046151    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.047102    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.047225    4789 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.047234    4789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:27:50.047243    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.047325    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.047417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.047495    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.047571    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.056055    4789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:27:50.076134    4789 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.076146    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:27:50.076163    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.076310    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.076417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.076556    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.076664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.113554    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 10:27:50.127003    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.262022    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.488277    4789 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
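	The sed pipeline above (the configmap coredns replace) splices a hosts block into the CoreDNS Corefile so host.minikube.internal resolves to the hypervisor gateway (192.169.0.1), with fallthrough preserving normal resolution for everything else; the confirmation line just logged reports exactly that injection. The patched Corefile can be inspected with plain kubectl:

	    # Show the patched Corefile; expect a hosts block mapping
	    # 192.169.0.1 -> host.minikube.internal ahead of the other plugins:
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'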
	I0819 10:27:50.488318    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488327    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488534    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488547    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488556    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488563    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488564    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488681    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488704    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488718    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488767    4789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:27:50.488780    4789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:27:50.488862    4789 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 10:27:50.488867    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.488877    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.488882    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495057    4789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:27:50.495477    4789 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 10:27:50.495484    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.495490    4789 round_trippers.go:473]     Content-Type: application/json
	I0819 10:27:50.495494    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.498504    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:27:50.498632    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.498641    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.498797    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.498806    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.498814    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649595    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649607    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.649833    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.649843    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649848    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.649874    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649893    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.650019    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.650028    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.650044    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.673040    4789 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 10:27:50.709732    4789 addons.go:510] duration metric: took 698.654107ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 10:27:50.709774    4789 start.go:246] waiting for cluster config update ...
	I0819 10:27:50.709799    4789 start.go:255] writing updated cluster config ...
	I0819 10:27:50.746763    4789 out.go:201] 
	I0819 10:27:50.768467    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.768565    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.790908    4789 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:27:50.832651    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:50.832673    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:50.832790    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:50.832801    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:50.832852    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.833261    4789 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:50.833314    4789 start.go:364] duration metric: took 41.162µs to acquireMachinesLock for "ha-431000-m02"
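The acquireMachinesLock lines above show a named lock taken with a 500ms retry delay and a 13-minute timeout, so concurrent minikube invocations don't race on machine creation. minikube's real implementation uses a named-mutex library; the flock-based stand-in below is only a sketch of the same delay/timeout polling shape (the lock path in main is made up for the example):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/sys/unix"
    )

    // acquireMachinesLock takes an exclusive, non-blocking flock on a
    // well-known file, retrying every `delay` until `timeout` elapses --
    // the same shape as the {Delay:500ms Timeout:13m0s} spec in the log.
    func acquireMachinesLock(path string, delay, timeout time.Duration) (*os.File, error) {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
    	if err != nil {
    		return nil, err
    	}
    	deadline := time.Now().Add(timeout)
    	for {
    		// LOCK_NB makes Flock fail immediately instead of blocking,
    		// so we control the retry cadence ourselves.
    		if err := unix.Flock(int(f.Fd()), unix.LOCK_EX|unix.LOCK_NB); err == nil {
    			return f, nil // caller releases the lock by closing f
    		} else if time.Now().After(deadline) {
    			f.Close()
    			return nil, fmt.Errorf("timed out waiting for lock %s: %v", path, err)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	f, err := acquireMachinesLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()
    	fmt.Println("lock acquired")
    }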
	I0819 10:27:50.833329    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.833382    4789 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0819 10:27:50.854688    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:50.854833    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.854870    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.864309    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
	I0819 10:27:50.864640    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.864951    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.864963    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.865175    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.865294    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:27:50.865374    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:27:50.865472    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:50.865485    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:50.865515    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:50.865553    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865565    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865607    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:50.865634    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865649    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865666    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:50.865676    4789 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0819 10:27:50.865754    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.865776    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:27:50.891966    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:50.891987    4789 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0819 10:27:50.892145    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.892330    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:50.892137    4845 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:50.892421    4789 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:51.078705    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.078630    4845 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa...
	I0819 10:27:51.171843    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.171751    4845 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0819 10:27:51.171860    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0819 10:27:51.171868    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0819 10:27:51.172685    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.172591    4845 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02 ...
	I0819 10:27:51.544884    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.544910    4789 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:27:51.544922    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:27:51.571631    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:27:51.571653    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:51.571680    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571706    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571739    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:51.571767    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:51.571780    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:51.574668    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Pid is 4850
	I0819 10:27:51.575734    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:27:51.575757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.575783    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:51.576702    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:51.576759    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:51.576778    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:51.576816    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:51.576830    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:51.576844    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:51.582262    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:51.590515    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:51.591362    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:51.591388    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:51.591397    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:51.591407    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:51.978930    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:51.978947    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:52.094059    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:52.094091    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:52.094127    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:52.094142    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:52.094869    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:52.094879    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:53.577521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0819 10:27:53.577541    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:53.577636    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:53.578446    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:53.578461    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:53.578472    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:53.578481    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:53.578489    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:53.578507    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:55.579485    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0819 10:27:55.579501    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:55.579576    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:55.580358    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:55.580387    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:55.580414    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:55.580426    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:55.580434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:55.580442    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.581588    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0819 10:27:57.581603    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:57.581681    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:57.582486    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:57.582510    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:57.582521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:57.582530    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:57.582540    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:57.582548    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.680321    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:27:57.680434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:27:57.680445    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:27:57.704982    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:27:59.583757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0819 10:27:59.583772    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:59.583842    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:59.584652    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:59.584696    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:59.584710    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:59.584720    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:59.584729    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:59.584737    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:28:01.585137    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0819 10:28:01.585154    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.585235    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.585996    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:28:01.586042    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:28:01.586055    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:28:01.586080    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:28:01.586086    4789 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
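The "Attempt N" blocks above are a plain poll: the driver re-reads the host's DHCP lease database every ~2 seconds until an entry carrying the VM's freshly generated MAC appears, then takes that entry's IP. A minimal Go sketch of the idea, assuming a simplified one-entry-per-line layout matching the "dhcp entry" lines printed here (the native /var/db/dhcpd_leases syntax differs, so the parser below is illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    	"time"
    )

    // findIPForMAC scans a leases file for a line mentioning mac and
    // returns that line's IPAddress field. Field names mirror the
    // "dhcp entry" lines in the log above, not the real lease format.
    func findIPForMAC(path, mac string) (string, bool) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return "", false
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		if !strings.Contains(line, mac) {
    			continue
    		}
    		for _, field := range strings.Fields(line) {
    			if strings.HasPrefix(field, "IPAddress:") {
    				return strings.TrimPrefix(field, "IPAddress:"), true
    			}
    		}
    	}
    	return "", false
    }

    func main() {
    	mac := "5a:74:68:47:b9:72" // the generated MAC from the log
    	for attempt := 0; attempt < 30; attempt++ {
    		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
    			fmt.Printf("Found match: %s -> %s\n", mac, ip)
    			return
    		}
    		time.Sleep(2 * time.Second) // matches the ~2s spacing between attempts
    	}
    	fmt.Println("no lease found")
    }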
	I0819 10:28:01.586098    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:01.586694    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586794    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586889    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:28:01.586896    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:28:01.586980    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.587029    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.587790    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:28:01.587796    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:28:01.587800    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:28:01.587804    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:01.587881    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:01.587956    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588060    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588138    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:01.588256    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:01.588435    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:01.588443    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:28:02.645180    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
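"Waiting for SSH" amounts to dialing port 22 with the machine's generated key and re-running the no-op command `exit 0` until it succeeds, which is the empty-output result logged above. A rough sketch using golang.org/x/crypto/ssh; the address, user, key path, retry counts, and the permissive host-key callback are assumptions suited to a throwaway local VM, not minikube's exact code:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func waitForSSH(addr, user, keyPath string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User: user,
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Acceptable for a just-created local VM; real hosts should
    		// verify host keys instead.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    		Timeout:         5 * time.Second,
    	}
    	for attempt := 0; attempt < 60; attempt++ {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err != nil {
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		sess, err := client.NewSession()
    		if err != nil {
    			client.Close()
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		runErr := sess.Run("exit 0") // same no-op probe as the log's SSH command
    		sess.Close()
    		client.Close()
    		if runErr == nil {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s never became available", addr)
    }

    func main() {
    	err := waitForSSH("192.169.0.6:22", "docker",
    		os.ExpandEnv("$HOME/.minikube/machines/ha-431000-m02/id_rsa"))
    	if err != nil {
    		fmt.Println(err)
    	}
    }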
	I0819 10:28:02.645193    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:28:02.645198    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.645326    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.645422    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645501    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645583    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.645718    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.645869    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.645877    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:28:02.700961    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:28:02.700992    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:28:02.700998    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:28:02.701003    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701132    4789 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:28:02.701143    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701237    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.701327    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.701424    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701502    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701588    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.701720    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.701855    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.701864    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:28:02.773500    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:28:02.773515    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.773649    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.773737    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773840    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773945    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.774071    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.774226    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.774237    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:28:02.838956    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:28:02.838971    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:28:02.838984    4789 buildroot.go:174] setting up certificates
	I0819 10:28:02.838992    4789 provision.go:84] configureAuth start
	I0819 10:28:02.838998    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.839135    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:02.839223    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.839322    4789 provision.go:143] copyHostCerts
	I0819 10:28:02.839347    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839393    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:28:02.839399    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839532    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:28:02.839738    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839769    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:28:02.839774    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839845    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:28:02.839992    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840021    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:28:02.840025    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840090    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:28:02.840244    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
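The server certificate generated here must cover every name and IP the node can be reached by, which is why the san=[...] list mixes loopback, the node IP, the hostname, and "minikube". A self-contained sketch of issuing such a SAN-bearing, CA-signed server certificate with crypto/x509; the organization string, lifetimes, and the throwaway in-memory CA are illustrative, not minikube's actual values:

    package main

    import (
    	"crypto"
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a CA-signed server cert whose SANs cover every
    // name/IP in sans, splitting them into IP SANs and DNS SANs.
    func newServerCert(caCert *x509.Certificate, caKey crypto.Signer, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, s := range sans {
    		if ip := net.ParseIP(s); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, s)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }

    func main() {
    	// Throwaway self-signed CA so the example runs end-to-end
    	// (errors elided for brevity in this setup block).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA (example)"},
    		NotBefore:             time.Now().Add(-time.Hour),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	sans := []string{"127.0.0.1", "192.169.0.6", "ha-431000-m02", "localhost", "minikube"}
    	certPEM, _, err := newServerCert(caCert, caKey, "jenkins.ha-431000-m02", sans)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(certPEM))
    }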
	I0819 10:28:02.878856    4789 provision.go:177] copyRemoteCerts
	I0819 10:28:02.878899    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:28:02.878912    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.879041    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.879132    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.879231    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.879330    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:02.914748    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:28:02.914819    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:28:02.934608    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:28:02.934673    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:28:02.954833    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:28:02.954900    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:28:02.974652    4789 provision.go:87] duration metric: took 135.649275ms to configureAuth
	I0819 10:28:02.974666    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:28:02.974809    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:02.974823    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:02.974958    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.975063    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.975147    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975219    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975328    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.975454    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.975601    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.975609    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:28:03.033628    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:28:03.033639    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:28:03.033715    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:28:03.033730    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.033861    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.033950    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034053    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034140    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.034264    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.034412    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.034459    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:28:03.102644    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:28:03.102663    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.102811    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.102898    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.102999    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.103120    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.103244    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.103390    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.103404    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:28:04.637367    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
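The docker.service unit echoed back above is rendered host-side (note the per-node Environment=NO_PROXY line), shipped over SSH as docker.service.new, and only swapped in when `diff` reports a change, which keeps re-provisioning idempotent. A toy version of the render step with text/template; the template fields and flag subset are trimmed-down assumptions, not the full unit minikube writes:

    package main

    import (
    	"os"
    	"text/template"
    )

    const unitTmpl = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket
    Requires=minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    {{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"{{end}}
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --label provider={{.Provider}}
    ExecReload=/bin/kill -s HUP $MAINPID

    [Install]
    WantedBy=multi-user.target
    `

    type unitData struct {
    	NoProxy  string
    	Port     int
    	Provider string
    }

    func main() {
    	t := template.Must(template.New("docker.service").Parse(unitTmpl))
    	// Values mirror what the log shows for this node.
    	_ = t.Execute(os.Stdout, unitData{NoProxy: "192.169.0.5", Port: 2376, Provider: "hyperkit"})
    }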
	I0819 10:28:04.637381    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:28:04.637388    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0819 10:28:04.637524    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:28:04.637530    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:28:04.637534    4789 client.go:171] duration metric: took 13.771742286s to LocalClient.Create
	I0819 10:28:04.637544    4789 start.go:167] duration metric: took 13.771771513s to libmachine.API.Create "ha-431000"
	I0819 10:28:04.637550    4789 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:28:04.637557    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:28:04.637566    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.637712    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:28:04.637723    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.637834    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.637926    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.638026    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.638127    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.678475    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:28:04.682965    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:28:04.682980    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:28:04.683079    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:28:04.683246    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:28:04.683253    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:28:04.683434    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:28:04.695086    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:04.723279    4789 start.go:296] duration metric: took 85.720185ms for postStartSetup
	I0819 10:28:04.723311    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:04.723943    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.724123    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:28:04.724446    4789 start.go:128] duration metric: took 13.890752069s to createHost
	I0819 10:28:04.724460    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.724558    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.724679    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724786    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724871    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.724979    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:04.725097    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:04.725103    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:28:04.784682    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088484.852271103
	
	I0819 10:28:04.784694    4789 fix.go:216] guest clock: 1724088484.852271103
	I0819 10:28:04.784698    4789 fix.go:229] Guest: 2024-08-19 10:28:04.852271103 -0700 PDT Remote: 2024-08-19 10:28:04.724454 -0700 PDT m=+55.319126445 (delta=127.817103ms)
	I0819 10:28:04.784725    4789 fix.go:200] guest clock delta is within tolerance: 127.817103ms
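The fix.go lines above read the guest's `date +%s.%N`, diff it against the host clock, and skip resynchronization because the ~128ms delta is inside tolerance. A small sketch of that comparison; the 2-second tolerance constant is an assumption for illustration, not the threshold minikube actually uses:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1724088484.852271103" (date +%s.%N output,
    // with a fixed 9-digit nanosecond fraction) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1724088484.852271103") // value taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // illustrative threshold
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }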
	I0819 10:28:04.784731    4789 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.951104834s
	I0819 10:28:04.784750    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.784884    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.807240    4789 out.go:177] * Found network options:
	I0819 10:28:04.829600    4789 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:28:04.851548    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.851607    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852495    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852747    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852876    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:28:04.852915    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:28:04.852962    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.853080    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:28:04.853100    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.853127    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853372    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853394    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853596    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853633    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853742    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853804    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.853880    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:28:04.886788    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:28:04.886847    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:28:04.931189    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:28:04.931209    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:04.931315    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:04.947443    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:28:04.955693    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:28:04.964155    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:28:04.964197    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:28:04.972493    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.980548    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:28:04.988709    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.996856    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:28:05.005271    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:28:05.013575    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:28:05.021801    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:28:05.030285    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:28:05.037842    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:28:05.045332    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.140730    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:28:05.159555    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:05.159625    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:28:05.177222    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.189624    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:28:05.203743    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.214606    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.224836    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:28:05.249649    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.261132    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:05.276191    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:28:05.279129    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:28:05.287175    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:28:05.300748    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:28:05.396444    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:28:05.505778    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:28:05.505805    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:28:05.520914    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.616215    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:28:07.911303    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.295016426s)
	I0819 10:28:07.911366    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:28:07.923467    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:28:07.938312    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:07.949283    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:28:08.046922    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:28:08.152880    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.256594    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:28:08.271339    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:08.283089    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.384798    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:28:08.441813    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:28:08.441881    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:28:08.446421    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:28:08.446473    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:28:08.449807    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:28:08.479621    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:28:08.479690    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.496571    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.537488    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:28:08.579078    4789 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:28:08.603340    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:08.603786    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:28:08.608372    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:08.618166    4789 mustload.go:65] Loading cluster: ha-431000
	I0819 10:28:08.618314    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:08.618533    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.618549    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.627122    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
	I0819 10:28:08.627459    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.627845    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.627857    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.628097    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.628239    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:28:08.628342    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:08.628430    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:28:08.629353    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:08.629592    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.629608    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.638041    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51172
	I0819 10:28:08.638388    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.638753    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.638770    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.638992    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.639108    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:08.639209    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:28:08.639216    4789 certs.go:194] generating shared ca certs ...
	I0819 10:28:08.639225    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.639357    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:28:08.639425    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:28:08.639434    4789 certs.go:256] generating profile certs ...
	I0819 10:28:08.639538    4789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:28:08.639562    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788
	I0819 10:28:08.639575    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0819 10:28:08.693749    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 ...
	I0819 10:28:08.693766    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788: {Name:mkade16cb35e521e9e55fc42d7cb129c8b94b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694149    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 ...
	I0819 10:28:08.694160    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788: {Name:mkeae0a28d48da45f84299952289f15db5f944f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694378    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:28:08.694703    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:28:08.694954    4789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
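	certs.go generates the apiserver serving certificate with the SAN list shown above — the kubernetes service IP, localhost, both node IPs, and the kube-vip VIP 192.169.0.254 — signed by the shared minikubeCA. A minimal crypto/x509 sketch of that shape (key sizes and lifetimes are illustrative, not minikube's exact values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving cert carrying the SAN list from the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"), net.ParseIP("192.169.0.254"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}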
	I0819 10:28:08.694964    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:28:08.694987    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:28:08.695006    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:28:08.695024    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:28:08.695042    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:28:08.695060    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:28:08.695078    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:28:08.695096    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:28:08.695175    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:28:08.695213    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:28:08.695228    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:28:08.695261    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:28:08.695290    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:28:08.695321    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:28:08.695400    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:08.695438    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:28:08.695462    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:28:08.695482    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:08.695511    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:08.695664    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:08.695745    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:08.695845    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:08.695925    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:08.729193    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:28:08.736181    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:28:08.748665    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:28:08.751826    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:28:08.773481    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:28:08.777252    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:28:08.787546    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:28:08.791015    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:28:08.800105    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:28:08.803218    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:28:08.812240    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:28:08.815351    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:28:08.824083    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:28:08.844052    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:28:08.864107    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:28:08.884612    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:28:08.904284    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 10:28:08.924397    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:28:08.944026    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:28:08.964689    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:28:08.984934    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:28:09.004413    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:28:09.024043    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:28:09.043924    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:28:09.058066    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:28:09.071585    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:28:09.085080    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:28:09.098536    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:28:09.112048    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:28:09.125242    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:28:09.139717    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:28:09.144032    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:28:09.152602    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.155967    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.156009    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.160192    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:28:09.168568    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:28:09.176997    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180533    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180568    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.184799    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:28:09.193356    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:28:09.201811    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205453    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205494    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.209760    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
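	For each CA bundle, the runner asks openssl for the certificate's subject hash and symlinks /etc/ssl/certs/<hash>.0 at it (51391683.0, 3ec20f2e.0, b5213941.0 above); the hashed-directory symlink is how OpenSSL locates trusted CAs. A small Go sketch of the same two steps:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash reproduces the shell pattern in the log: ask openssl for the
// certificate's subject hash, then point <hash>.0 in certsDir at the cert.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // "-f" semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}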
	I0819 10:28:09.218392    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:28:09.222392    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:28:09.222437    4789 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:28:09.222498    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:28:09.222516    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:28:09.222559    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:28:09.234408    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:28:09.234452    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
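	kube-vip.go renders the static-pod manifest above from a template, substituting the VIP (192.169.0.254), port, and interface, with lb_enable toggled on because control-plane load-balancing was auto-enabled. A pared-down text/template sketch keeping only the per-cluster values — the real template carries the full env list shown above:

package main

import (
	"os"
	"text/template"
)

const vipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	t.Execute(os.Stdout, struct {
		Image, VIP, Interface string
		Port                  int
	}{
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
		VIP:       "192.169.0.254",
		Interface: "eth0",
		Port:      8443,
	})
}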
	I0819 10:28:09.234506    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.242939    4789 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:28:09.242994    4789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0819 10:28:09.251336    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 10:28:11.797289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:28:11.809069    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.809192    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.812267    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:28:11.812291    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 10:28:12.469259    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.469340    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.472845    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:28:12.472869    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:28:13.348737    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.348820    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.352429    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:28:13.352449    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
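	Since the node has no k8s binaries yet, download.go fetches kubelet, kubectl, and kubeadm in parallel (note the three near-identical timestamps) and verifies each against its published .sha256 before scp'ing it into /var/lib/minikube/binaries. A sketch of that fetch-and-verify step using golang.org/x/sync/errgroup; the destination dir and plain http.Get are simplifications:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
	"strings"

	"golang.org/x/sync/errgroup"
)

// fetchVerified downloads url into dir and checks it against the published
// .sha256 file, mirroring the ?checksum=file:...sha256 form in the log.
func fetchVerified(dir, url string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(filepath.Join(dir, filepath.Base(url)))
	if err != nil {
		return err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("%s: checksum mismatch: got %s want %s", url, got, want)
	}
	return nil
}

func main() {
	var g errgroup.Group
	for _, bin := range []string{"kubelet", "kubectl", "kubeadm"} {
		url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/" + bin
		g.Go(func() error { return fetchVerified(os.TempDir(), url) })
	}
	if err := g.Wait(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}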
	I0819 10:28:13.542994    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:28:13.550937    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:28:13.564187    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:28:13.577654    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:28:13.591433    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:28:13.594347    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:13.604347    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:13.710422    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:13.730131    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:13.730407    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:13.730448    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:13.739474    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51199
	I0819 10:28:13.739816    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:13.740174    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:13.740190    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:13.740438    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:13.740564    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:13.740661    4789 start.go:317] joinCluster: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:28:13.740750    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 10:28:13.740767    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:13.740857    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:13.740939    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:13.741027    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:13.741101    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:13.815525    4789 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:13.815563    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0819 10:28:41.108330    4789 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (27.292143754s)
	I0819 10:28:41.108351    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 10:28:41.504714    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000-m02 minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=false
	I0819 10:28:41.585348    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-431000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 10:28:41.693283    4789 start.go:319] duration metric: took 27.951997328s to joinCluster
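	The join itself is two commands: `kubeadm token create --print-join-command --ttl=0` on the primary, then the printed join line re-run on m02 with --control-plane, --node-name, and the advertise address appended — exactly what the ~27s ssh_runner call above executed. A minimal Go wrapper for the first half, assuming kubeadm is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// printJoin asks kubeadm for a join command (the --print-join-command step
// in the log) and extends it with the control-plane flags the log shows
// being appended for m02.
func printJoin(advertiseIP, nodeName string) (string, error) {
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		return "", err
	}
	base := strings.TrimSpace(string(out))
	extra := fmt.Sprintf(" --control-plane --node-name=%s --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		nodeName, advertiseIP)
	return base + extra, nil
}

func main() {
	cmd, err := printJoin("192.169.0.6", "ha-431000-m02")
	if err != nil {
		fmt.Println("token create failed:", err)
		return
	}
	// In minikube this string is then executed over SSH on the joining node.
	fmt.Println(cmd)
}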
	I0819 10:28:41.693326    4789 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:41.693537    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:41.715528    4789 out.go:177] * Verifying Kubernetes components...
	I0819 10:28:41.790354    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:41.995139    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:42.017369    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:28:42.017608    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:28:42.017650    4789 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:28:42.017827    4789 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:42.017919    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.017925    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.017930    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.017935    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.025432    4789 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 10:28:42.518902    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.518917    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.518923    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.518927    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.521742    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:43.018396    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.018411    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.018417    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.018421    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.021454    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:43.518031    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.518083    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.518106    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.518116    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.522999    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:44.018193    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.018219    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.018231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.018237    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.021854    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:44.022387    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:44.518152    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.518189    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.518196    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.518199    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.520027    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.019772    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.019792    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.019799    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.019803    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.021628    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.518039    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.518053    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.518059    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.518064    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.520113    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:46.018198    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.018232    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.018239    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.018243    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.020136    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.518474    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.518490    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.518496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.518499    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.520505    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.520916    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:47.019124    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.019150    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.019162    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.022729    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:47.518316    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.518341    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.518351    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.518356    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.520471    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:48.019594    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.019620    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.019630    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.019636    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.023447    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:48.518492    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.518526    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.518583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.518593    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.523421    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:48.523787    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:49.019217    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.019242    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.019254    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.019260    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.022862    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:49.520299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.520324    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.520337    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.520342    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.523532    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.019383    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.019412    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.019424    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.019430    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.022847    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.519489    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.519503    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.519511    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.519515    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.522131    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:51.019130    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.019153    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.019163    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.022497    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:51.022894    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:51.518391    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.518448    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.518465    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.518476    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.521848    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.019014    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.019045    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.019103    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.019117    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.022339    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.519630    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.519644    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.519651    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.519655    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.522019    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.018435    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.018460    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.018472    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.018480    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.021850    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:53.518299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.518340    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.518349    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.518355    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.520795    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.521268    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:54.020380    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.020406    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.020418    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.020423    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.024178    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:54.519346    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.519364    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.519383    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.519387    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.521155    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:55.020400    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.020425    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.020437    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.020444    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.024326    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:55.519229    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.519245    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.519264    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.519268    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.521435    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:55.521852    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:56.019678    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.019703    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.019714    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.019719    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.023317    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:56.518539    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.518563    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.518576    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.518581    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.521781    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.020424    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.020449    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.020460    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.020465    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.024114    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.519399    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.519428    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.519468    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.519475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.522788    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.523223    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:58.018734    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.018759    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.018770    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.018777    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.022242    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.518348    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.518359    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.518371    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.518375    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.522907    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.523168    4789 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:28:58.523182    4789 node_ready.go:38] duration metric: took 16.504973252s for node "ha-431000-m02" to be "Ready" ...
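	node_ready.go polled GET /api/v1/nodes/ha-431000-m02 roughly every 500ms until the NodeReady condition flipped to True, about 16.5s after the join. A client-go sketch of the same loop (kubeconfig path and error handling are simplified):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the node every 500ms for up to 6 minutes, the cadence and budget
	// the log shows for "ha-431000-m02".
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-431000-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}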
	I0819 10:28:58.523189    4789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:28:58.523237    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:28:58.523243    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.523249    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.523253    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.528083    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.532699    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.532761    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:28:58.532768    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.532774    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.532776    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.535978    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.536344    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.536351    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.536358    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.536361    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.538061    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.538368    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.538377    4789 pod_ready.go:82] duration metric: took 5.660556ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538383    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538413    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:28:58.538417    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.538423    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.538428    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.540013    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.540457    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.540465    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.540471    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.540475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.542120    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.542393    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.542400    4789 pod_ready.go:82] duration metric: took 4.011453ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542406    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542439    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:28:58.542444    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.542449    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.542454    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.543986    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.544340    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.544347    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.544353    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.544356    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.545868    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.546173    4789 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.546181    4789 pod_ready.go:82] duration metric: took 3.769725ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546187    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546221    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:28:58.546226    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.546231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.546234    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.547638    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.548110    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.548118    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.548123    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.548127    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.549514    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.549853    4789 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.549860    4789 pod_ready.go:82] duration metric: took 3.668598ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
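	Each pod_ready.go check above is the standard PodReady condition test, applied per pod and paired with a GET on the pod's node. A small client-go helper showing that test across kube-system (listing instead of per-pod GETs, for brevity):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, the same
// test pod_ready.go applies to each system-critical pod above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}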
	I0819 10:28:58.549868    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.718822    4789 request.go:632] Waited for 168.888912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718861    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718867    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.718872    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.718876    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.721032    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:58.919673    4789 request.go:632] Waited for 198.011193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919731    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919740    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.919750    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.919807    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.923236    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.923670    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.923682    4789 pod_ready.go:82] duration metric: took 373.799986ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.923691    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.119399    4789 request.go:632] Waited for 195.629207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119559    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119572    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.119583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.119589    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.122804    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.318619    4789 request.go:632] Waited for 195.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318674    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318695    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.318702    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.318705    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.320812    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.321165    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.321173    4789 pod_ready.go:82] duration metric: took 397.466691ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
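
The repeated request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side token-bucket limiter (rest.Config QPS/Burst), not from server-side API Priority and Fairness. A minimal Go sketch, assuming client-go is on the module path and a kubeconfig at the default location; the QPS/Burst values here are illustrative, not minikube's:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config (path is illustrative).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go throttles on the client side with a token bucket;
        // requests beyond Burst wait, producing the request.go:632 lines.
        cfg.QPS = 5    // sustained requests per second (illustrative)
        cfg.Burst = 10 // bucket size (illustrative)

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }

With a low QPS, back-to-back GETs like the pod/node pairs above queue for roughly 1/QPS seconds each, which matches the ~195ms waits in this log.
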
	I0819 10:28:59.321180    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.520541    4789 request.go:632] Waited for 199.292765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520642    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520652    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.520663    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.520672    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.524463    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.718728    4789 request.go:632] Waited for 192.615056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718803    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718811    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.718818    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.718823    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.720955    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.721397    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.721407    4789 pod_ready.go:82] duration metric: took 400.213219ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.721415    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.918907    4789 request.go:632] Waited for 197.434904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919004    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919014    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.919024    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.919030    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.922451    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.119192    4789 request.go:632] Waited for 196.220574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119263    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119272    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.119286    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.119297    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.122630    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.122957    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.122968    4789 pod_ready.go:82] duration metric: took 401.538458ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.122977    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.320524    4789 request.go:632] Waited for 197.475989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320660    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320672    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.320681    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.320689    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.323985    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.519403    4789 request.go:632] Waited for 194.628597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519535    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519546    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.519560    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.519568    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.523121    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.523435    4789 pod_ready.go:93] pod "kube-proxy-5h7j2" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.523449    4789 pod_ready.go:82] duration metric: took 400.456993ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.523457    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.718666    4789 request.go:632] Waited for 195.15054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718742    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718752    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.718786    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.718800    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.721920    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.918782    4789 request.go:632] Waited for 196.40919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918873    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918882    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.918896    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.918906    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.922355    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.922815    4789 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.922824    4789 pod_ready.go:82] duration metric: took 399.351509ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.922830    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.118854    4789 request.go:632] Waited for 195.977175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118950    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118965    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.118981    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.118987    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.122683    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.318886    4789 request.go:632] Waited for 195.887859ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319029    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319042    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.319053    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.319063    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.322689    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.323187    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.323200    4789 pod_ready.go:82] duration metric: took 400.355182ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.323208    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.518928    4789 request.go:632] Waited for 195.662505ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519043    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519057    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.519070    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.519077    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.522736    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.718819    4789 request.go:632] Waited for 195.65197ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718885    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718891    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.718899    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.718905    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.721246    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:29:01.721682    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.721691    4789 pod_ready.go:82] duration metric: took 398.467113ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.721701    4789 pod_ready.go:39] duration metric: took 3.198431164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
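
Each pod_ready.go wait above boils down to polling a pod until its PodReady condition reports True. A hedged sketch of that predicate and loop using client-go; isPodReady and the 2-second interval are my own choices, while the 6-minute timeout mirrors the "waiting up to 6m0s" lines:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True — the same
    // predicate behind the pod_ready.go:93 "Ready":"True" lines above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll one pod (name taken from the log) for up to 6 minutes.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-431000", metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                return isPodReady(pod), nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }
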
	I0819 10:29:01.721718    4789 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:29:01.721774    4789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:29:01.735634    4789 api_server.go:72] duration metric: took 20.041851081s to wait for apiserver process to appear ...
	I0819 10:29:01.735647    4789 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:29:01.735663    4789 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:29:01.738815    4789 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:29:01.738848    4789 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:29:01.738854    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.738860    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.738864    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.739526    4789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:29:01.739580    4789 api_server.go:141] control plane version: v1.31.0
	I0819 10:29:01.739589    4789 api_server.go:131] duration metric: took 3.937962ms to wait for apiserver health ...
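
api_server.go:253 above probes the control plane at https://192.169.0.5:8443/healthz and treats a 200 response with body "ok" as healthy. A minimal sketch of the same probe; skipping TLS verification is purely for illustration (the real check trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Illustrative client; a real check should trust the cluster CA
        // instead of using InsecureSkipVerify.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.169.0.5:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // The apiserver answers 200 with the literal body "ok" when healthy.
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    }
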
	I0819 10:29:01.739594    4789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:29:01.918638    4789 request.go:632] Waited for 178.995687ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918733    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918745    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.918757    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.918762    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.922864    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:01.926606    4789 system_pods.go:59] 17 kube-system pods found
	I0819 10:29:01.926628    4789 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:01.926633    4789 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:01.926636    4789 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:01.926640    4789 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:01.926642    4789 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:01.926645    4789 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:01.926647    4789 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:01.926650    4789 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:01.926653    4789 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:01.926656    4789 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:01.926659    4789 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:01.926666    4789 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:01.926669    4789 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:01.926672    4789 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:01.926674    4789 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:01.926677    4789 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:01.926680    4789 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:01.926684    4789 system_pods.go:74] duration metric: took 187.080965ms to wait for pod list to return data ...
	I0819 10:29:01.926689    4789 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:29:02.119406    4789 request.go:632] Waited for 192.625822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119507    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119517    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.119528    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.119535    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.123120    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.123283    4789 default_sa.go:45] found service account: "default"
	I0819 10:29:02.123293    4789 default_sa.go:55] duration metric: took 196.595366ms for default service account to be created ...
	I0819 10:29:02.123300    4789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:29:02.319795    4789 request.go:632] Waited for 196.43255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319928    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319939    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.319947    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.319954    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.324586    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:02.328058    4789 system_pods.go:86] 17 kube-system pods found
	I0819 10:29:02.328071    4789 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:02.328075    4789 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:02.328078    4789 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:02.328083    4789 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:02.328086    4789 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:02.328088    4789 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:02.328091    4789 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:02.328093    4789 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:02.328096    4789 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:02.328098    4789 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:02.328101    4789 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:02.328103    4789 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:02.328106    4789 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:02.328109    4789 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:02.328112    4789 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:02.328115    4789 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:02.328117    4789 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:02.328122    4789 system_pods.go:126] duration metric: took 204.813151ms to wait for k8s-apps to be running ...
	I0819 10:29:02.328133    4789 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:29:02.328183    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:29:02.340002    4789 system_svc.go:56] duration metric: took 11.865981ms WaitForService to wait for kubelet
	I0819 10:29:02.340017    4789 kubeadm.go:582] duration metric: took 20.646222268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
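
system_svc.go verifies the kubelet unit by running "sudo systemctl is-active --quiet service kubelet" over SSH, as seen above. A rough sketch with golang.org/x/crypto/ssh; the host, user, and key path come from this log, and ignoring the host key is an illustration-only shortcut:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Illustration only; a real runner should pin the host key.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        // is-active --quiet exits 0 iff the unit is active; Run returns a
        // non-nil error (an *ssh.ExitError) for non-zero exits.
        if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
            fmt.Println("kubelet not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
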
	I0819 10:29:02.340034    4789 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:29:02.518831    4789 request.go:632] Waited for 178.726274ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518969    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518980    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.518991    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.518998    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.522659    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.523326    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523339    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523348    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523351    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523354    4789 node_conditions.go:105] duration metric: took 183.311856ms to run NodePressure ...
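
The NodePressure verification reads each node's reported capacity (the ephemeral-storage=17734596Ki and cpu=2 lines above) and would fail on any pressure condition. A sketch of the equivalent client-go read; the pressure filter at the end is my simplification of that check:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Matches the node_conditions.go:122/123 lines above.
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
            for _, c := range n.Status.Conditions {
                // Any of Memory/Disk/PIDPressure being True would fail the check.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure condition: %s\n", c.Type)
                }
            }
        }
    }
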
	I0819 10:29:02.523361    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:29:02.523378    4789 start.go:255] writing updated cluster config ...
	I0819 10:29:02.544110    4789 out.go:201] 
	I0819 10:29:02.566227    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:02.566358    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.588965    4789 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:29:02.630777    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:29:02.630803    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:29:02.630953    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:29:02.630966    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:29:02.631053    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.631767    4789 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:29:02.631849    4789 start.go:364] duration metric: took 64.609µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:29:02.631869    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:29:02.631978    4789 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0819 10:29:02.652968    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:29:02.653116    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:29:02.653158    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:29:02.663539    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0819 10:29:02.663925    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:29:02.664263    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:29:02.664277    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:29:02.664539    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:29:02.664672    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:02.664758    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:02.664867    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:29:02.664899    4789 client.go:168] LocalClient.Create starting
	I0819 10:29:02.664932    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:29:02.664992    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665005    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665051    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:29:02.665087    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665103    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665116    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:29:02.665122    4789 main.go:141] libmachine: (ha-431000-m03) Calling .PreCreateCheck
	I0819 10:29:02.665218    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.665228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:02.674109    4789 main.go:141] libmachine: Creating machine...
	I0819 10:29:02.674126    4789 main.go:141] libmachine: (ha-431000-m03) Calling .Create
	I0819 10:29:02.674302    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.674550    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.674293    4918 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:29:02.674675    4789 main.go:141] libmachine: (ha-431000-m03) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:29:02.956098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.955977    4918 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa...
	I0819 10:29:03.041212    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.041121    4918 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk...
	I0819 10:29:03.041230    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing magic tar header
	I0819 10:29:03.041239    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing SSH key tar header
	I0819 10:29:03.042098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.042003    4918 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03 ...
	I0819 10:29:03.582755    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.582783    4789 main.go:141] libmachine: (ha-431000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:29:03.582846    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:29:03.618942    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:29:03.618960    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:29:03.619021    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619049    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619085    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:29:03.619116    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
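
The DEBUG lines above show the exact hyperkit argv the driver builds; it then starts the process and records the child PID ("Pid is 4921" below). A heavily abbreviated os/exec sketch of such a launch; the argument list is truncated to the first few flags (the full set is in the Arguments line above), so as written it would not boot a usable VM:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Abbreviated argv; see the DEBUG: hyperkit: Arguments line above
        // for the full device/slot layout.
        cmd := exec.Command("/usr/local/bin/hyperkit",
            "-A", "-u",
            "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid",
            "-c", "2", "-m", "2200M",
        )
        cmd.Stdout = os.Stdout // the driver redirects these to its logger
        cmd.Stderr = os.Stderr
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        // "Pid is 4921" in the log corresponds to cmd.Process.Pid here.
        fmt.Println("hyperkit pid:", cmd.Process.Pid)
        if err := cmd.Wait(); err != nil {
            fmt.Println("hyperkit exited:", err)
        }
    }
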
	I0819 10:29:03.619133    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:29:03.621990    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Pid is 4921
	I0819 10:29:03.622461    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:29:03.622497    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.622585    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:03.623424    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:03.623486    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:03.623500    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:03.623537    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:03.623548    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:03.623558    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:03.623568    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:03.629643    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:29:03.638725    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:29:03.639577    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:03.639599    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:03.639609    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:03.639622    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.022361    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:29:04.022375    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:29:04.137228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:04.137262    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:04.137274    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:04.137284    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.138001    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:29:04.138016    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:29:05.623879    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 1
	I0819 10:29:05.623896    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:05.624023    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:05.624809    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:05.624873    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:05.624888    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:05.624904    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:05.624917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:05.624926    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:05.624935    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:07.626679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 2
	I0819 10:29:07.626696    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:07.626779    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:07.627539    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:07.627582    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:07.627592    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:07.627610    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:07.627619    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:07.627626    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:07.627635    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.627812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 3
	I0819 10:29:09.627828    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:09.627917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:09.628679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:09.628746    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:09.628777    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:09.628791    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:09.628799    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:09.628806    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:09.628812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.722721    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:29:09.722792    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:29:09.722802    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:29:09.745848    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:29:11.630390    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 4
	I0819 10:29:11.630407    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:11.630495    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:11.631275    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:11.631321    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:11.631331    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:11.631340    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:11.631359    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:11.631366    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:11.631387    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:13.633236    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 5
	I0819 10:29:13.633251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.633339    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.634147    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:13.634209    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0819 10:29:13.634221    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:29:13.634228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:29:13.634232    4789 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
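
The "Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases" loop above polls macOS's vmnet lease file every two seconds until the generated MAC shows up (attempt 5 finds it at 192.169.0.7). A hedged sketch of the lookup; the ip_address=/hw_address= block layout is the conventional format of that file and is assumed here, and leaseIPForMAC is my own helper name:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // leaseIPForMAC scans the vmnet lease file for a hw_address entry
    // matching mac and returns the ip_address from the same lease block.
    func leaseIPForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            // ip_address precedes hw_address within each lease block.
            if strings.HasPrefix(line, "ip_address=") {
                ip = strings.TrimPrefix(line, "ip_address=")
            }
            // hw_address lines look like "hw_address=1,f6:29:ff:43:e4:63".
            if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
                return ip, nil
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("no lease for %s", mac)
    }

    func main() {
        ip, err := leaseIPForMAC("/var/db/dhcpd_leases", "f6:29:ff:43:e4:63")
        if err != nil {
            panic(err)
        }
        fmt.Println("IP:", ip) // 192.169.0.7 in the run above
    }
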
	I0819 10:29:13.634299    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:13.634943    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635064    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635157    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:29:13.635165    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:29:13.635251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.635310    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.636120    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:29:13.636129    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:29:13.636133    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:29:13.636138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:13.636228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:13.636309    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636392    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636477    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:13.636587    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:13.636755    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:13.636763    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:29:14.697546    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:29:14.697558    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:29:14.697564    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.697702    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.697798    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.697887    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.698009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.698168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.698318    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.698326    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:29:14.765778    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:29:14.765827    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:29:14.765833    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:29:14.765839    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.765977    4789 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:29:14.765988    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.766081    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.766185    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.766270    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766369    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766481    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.766635    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.766783    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.766792    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:29:14.841753    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:29:14.841769    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.841901    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.842009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842101    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842195    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.842324    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.842477    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.842489    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:29:14.911764    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:29:14.911779    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:29:14.911793    4789 buildroot.go:174] setting up certificates
	I0819 10:29:14.911800    4789 provision.go:84] configureAuth start
	I0819 10:29:14.911807    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.911942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:14.912037    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.912110    4789 provision.go:143] copyHostCerts
	I0819 10:29:14.912141    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912187    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:29:14.912193    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912326    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:29:14.912504    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912534    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:29:14.912539    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912651    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:29:14.912808    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912854    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:29:14.912859    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912935    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:29:14.913083    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
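	(The "generating server cert" step above issues a TLS server certificate for the SANs listed — 127.0.0.1, 192.169.0.7, ha-431000-m03, localhost, minikube — signed with the CA key pair. A minimal, self-contained Go sketch of that idea follows; it is self-signed for brevity and is not minikube's actual provision code:)
	
		package main
	
		import (
			"crypto/rand"
			"crypto/rsa"
			"crypto/x509"
			"crypto/x509/pkix"
			"encoding/pem"
			"log"
			"math/big"
			"net"
			"os"
			"time"
		)
	
		func main() {
			key, err := rsa.GenerateKey(rand.Reader, 2048)
			if err != nil {
				log.Fatal(err)
			}
			tmpl := &x509.Certificate{
				SerialNumber: big.NewInt(1),
				Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m03"}},
				// SANs copied from the provision.go log line above
				DNSNames:    []string{"ha-431000-m03", "localhost", "minikube"},
				IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
				NotBefore:   time.Now(),
				NotAfter:    time.Now().Add(365 * 24 * time.Hour),
				KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
				ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			}
			// self-signed here for brevity; the real flow signs with ca.pem/ca-key.pem
			der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
			if err != nil {
				log.Fatal(err)
			}
			pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		}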
	I0819 10:29:15.064390    4789 provision.go:177] copyRemoteCerts
	I0819 10:29:15.064440    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:29:15.064455    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.064599    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.064695    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.064786    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.064886    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:15.103656    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:29:15.103727    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:29:15.123430    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:29:15.123497    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:29:15.143265    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:29:15.143333    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:29:15.162885    4789 provision.go:87] duration metric: took 251.064942ms to configureAuth
	I0819 10:29:15.162900    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:29:15.163052    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:15.163065    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:15.163221    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.163329    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.163417    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163506    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.163693    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.163824    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.163831    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:29:15.225270    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:29:15.225286    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:29:15.225356    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:29:15.225368    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.225510    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.225619    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225708    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225810    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.225948    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.226090    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.226134    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:29:15.299640    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:29:15.299658    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.299797    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.299889    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.299978    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.300067    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.300202    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.300355    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.300368    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:29:16.819930    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:29:16.819945    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:29:16.819953    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetURL
	I0819 10:29:16.820095    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:29:16.820107    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:29:16.820113    4789 client.go:171] duration metric: took 14.154897138s to LocalClient.Create
	I0819 10:29:16.820124    4789 start.go:167] duration metric: took 14.154947877s to libmachine.API.Create "ha-431000"
	I0819 10:29:16.820129    4789 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:29:16.820136    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:29:16.820145    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.820288    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:29:16.820301    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.820396    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.820494    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.820582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.820664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:16.862693    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:29:16.866416    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:29:16.866431    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:29:16.866540    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:29:16.866725    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:29:16.866732    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:29:16.866944    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:29:16.874578    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:29:16.904910    4789 start.go:296] duration metric: took 84.771069ms for postStartSetup
	I0819 10:29:16.904942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:16.905569    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.905740    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:16.906122    4789 start.go:128] duration metric: took 14.273822612s to createHost
	I0819 10:29:16.906138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.906230    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.906303    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906387    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906475    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.906573    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:16.906690    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:16.906697    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:29:16.969389    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088556.958185685
	
	I0819 10:29:16.969401    4789 fix.go:216] guest clock: 1724088556.958185685
	I0819 10:29:16.969406    4789 fix.go:229] Guest: 2024-08-19 10:29:16.958185685 -0700 PDT Remote: 2024-08-19 10:29:16.906131 -0700 PDT m=+127.499217490 (delta=52.054685ms)
	I0819 10:29:16.969416    4789 fix.go:200] guest clock delta is within tolerance: 52.054685ms
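	(The fix.go lines above compare the guest's `date +%s.%N` reading against the host clock and accept the skew. The same arithmetic as a tiny runnable Go sketch, with both timestamps copied verbatim from the log; the tolerance constant is a placeholder, since the real threshold is not shown here:)
	
		package main
	
		import (
			"fmt"
			"time"
		)
	
		func main() {
			guest := time.Unix(1724088556, 958185685)  // guest: 1724088556.958185685
			remote := time.Unix(1724088556, 906131000) // host: 2024-08-19 10:29:16.906131 -0700 PDT
			delta := guest.Sub(remote)
			fmt.Println(delta) // 52.054685ms, matching the "delta=" field above
			const tolerance = time.Second                        // hypothetical placeholder
			fmt.Println(delta > -tolerance && delta < tolerance) // true => "within tolerance"
		}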
	I0819 10:29:16.969419    4789 start.go:83] releasing machines lock for "ha-431000-m03", held for 14.337247496s
	I0819 10:29:16.969437    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.969573    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.992258    4789 out.go:177] * Found network options:
	I0819 10:29:17.014265    4789 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:29:17.037508    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.037542    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.037561    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038432    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038682    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038835    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:29:17.038873    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:29:17.038922    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.038957    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.039067    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:29:17.039087    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:17.039116    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039298    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039332    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039497    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039590    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039679    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039721    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:17.039809    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:29:17.074320    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:29:17.074385    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:29:17.120302    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:29:17.120318    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.120398    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.135851    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:29:17.144402    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:29:17.152735    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.152784    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:29:17.161185    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.169599    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:29:17.177908    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.186319    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:29:17.194967    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:29:17.203702    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:29:17.212228    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:29:17.220632    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:29:17.228164    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:29:17.235717    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.329551    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:29:17.348829    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.348909    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:29:17.363903    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.374976    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:29:17.393061    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.404238    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.414728    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:29:17.438632    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.449143    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.464536    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:29:17.467445    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:29:17.474809    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:29:17.488421    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:29:17.581504    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:29:17.684960    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.684980    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:29:17.699658    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.803979    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:30:18.773891    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.968555005s)
	I0819 10:30:18.774012    4789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:30:18.808676    4789 out.go:201] 
	W0819 10:30:18.829152    4789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:29:15 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570013158Z" level=info msg="Starting up"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570447745Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.572542412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.584880924Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603137975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603181724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603219390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603233227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603303033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603338653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603471354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603509282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603521199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603528665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603591360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603811486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605351283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605389063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605504861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605538594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605610859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605677674Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607907354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607976584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607991948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608010711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608023403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608093276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608724366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608874333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608913351Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608929178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608943960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608968346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609006571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609021660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609032833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609044499Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609055485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609066063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609088279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609103865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609115537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609130257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609139734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609151164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609161605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609173829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609200246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609211000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609224200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609237871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609251525Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609296616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609316285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609327369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609362155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609478815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609512436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609530768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609541857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609553085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609563545Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610591556Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610680787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610769049Z" level=info msg="containerd successfully booted in 0.026402s"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.601341697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.606766805Z" level=info msg="Loading containers: start."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.688780306Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.769433920Z" level=info msg="Loading containers: done."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776749571Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776865122Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.804822251Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.805010917Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:29:16 ha-431000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.814047535Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:29:17 ha-431000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815466623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815881336Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815956644Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.816022765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:18 ha-431000-m03 dockerd[921]: time="2024-08-19T17:29:18.853267859Z" level=info msg="Starting up"
	Aug 19 17:30:18 ha-431000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
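	(Reading the journal above: the first dockerd (pid 514) starts and shuts down cleanly, but the restarted dockerd (pid 921) spends the full minute from 17:29:18 to 17:30:18 trying to dial /run/containerd/containerd.sock before the context deadline expires, so docker.service exits with status 1 and provisioning of ha-431000-m03 aborts with the RUNTIME_ENABLE error shown above.)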
	W0819 10:30:18.829235    4789 out.go:270] * 
	W0819 10:30:18.830413    4789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:30:18.888275    4789 out.go:201] 
	
	
	==> Docker <==
	Aug 19 17:28:07 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3745c7f8fb9ffda1a9528dbab0743afd132acd46a2634643d4b5a24035dc2e4/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/868ee98671e833d733f787480bd37f293c8c6eb8b4092a75c7b96c7993f5f451/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/74fd2f09b011aa0f318ae4259efd3f3d52dc61d0bd78f032481d1a46763eeaae/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.132794637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133043856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133186443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133435141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139175494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139344496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139355701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139421519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157876304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157962624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157975535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.158198941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621287999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621447365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621465217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621560978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6d38fc70c811c9647892071fd07ef2e6455806b20e204cd6583df80c81ba64b7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 19 17:30:23 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040175789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040258993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040272849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040810082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da6e4a61b6cf8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   6d38fc70c811c       busybox-7dff88458-x7m6m
	b9d1bccf00c94       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	e7cacf032435f       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   868ee98671e83       storage-provisioner
	a3891ab602da5       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              14 minutes ago      Running             kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                         14 minutes ago      Running             kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	ed733554ed160       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     14 minutes ago      Running             kube-vip                  0                   90ec229d87c2c       kube-vip-ha-431000
	11d9cd3b2f49f       1766f54c897f0                                                                                         14 minutes ago      Running             kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	262471364c991       604f5db92eaa8                                                                                         14 minutes ago      Running             kube-apiserver            0                   5a0fe916eaf1d       kube-apiserver-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                         14 minutes ago      Running             etcd                      0                   fc30d54d1b565       etcd-ha-431000
	2801f8f44773b       045733566833c                                                                                         14 minutes ago      Running             kube-controller-manager   0                   80d21805f230b       kube-controller-manager-ha-431000
	
	
	==> coredns [a3891ab602da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40841 - 35632 "HINFO IN 8043641794425982319.4992720317295253252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008506209s
	[INFO] 10.244.1.2:51889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132717s
	[INFO] 10.244.1.2:37985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001601417s
	[INFO] 10.244.1.2:55682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.007910651s
	[INFO] 10.244.0.4:38616 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.000569215s
	[INFO] 10.244.0.4:47772 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000054313s
	[INFO] 10.244.1.2:49768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135774s
	[INFO] 10.244.1.2:55729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00095124s
	[INFO] 10.244.1.2:38602 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089444s
	[INFO] 10.244.1.2:52875 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099022s
	[INFO] 10.244.1.2:49308 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063848s
	[INFO] 10.244.0.4:57863 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000064923s
	[INFO] 10.244.0.4:40409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096347s
	[INFO] 10.244.1.2:34617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084305s
	[INFO] 10.244.1.2:55843 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058734s
	[INFO] 10.244.0.4:43213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096675s
	[INFO] 10.244.0.4:44050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031036s
	[INFO] 10.244.1.2:49077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105574s
	[INFO] 10.244.1.2:57560 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084227s
	[INFO] 10.244.1.2:40959 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135434s
	
	
	==> coredns [b9d1bccf00c9] <==
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54195 - 29045 "HINFO IN 6513715404119561949.1799819676960271336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007921235s
	[INFO] 10.244.1.2:45210 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.055498798s
	[INFO] 10.244.0.4:53730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111076s
	[INFO] 10.244.0.4:51704 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000411643s
	[INFO] 10.244.1.2:54559 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088744s
	[INFO] 10.244.1.2:58642 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064137s
	[INFO] 10.244.1.2:34281 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000845538s
	[INFO] 10.244.0.4:53439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000058375s
	[INFO] 10.244.0.4:33951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106207s
	[INFO] 10.244.0.4:38202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000034691s
	[INFO] 10.244.0.4:46478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119286s
	[INFO] 10.244.0.4:53704 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053613s
	[INFO] 10.244.0.4:42766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051163s
	[INFO] 10.244.1.2:44413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116167s
	[INFO] 10.244.1.2:58453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067066s
	[INFO] 10.244.0.4:37472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063597s
	[INFO] 10.244.0.4:59559 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033396s
	[INFO] 10.244.1.2:59906 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000120736s
	[INFO] 10.244.0.4:47175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120659s
	[INFO] 10.244.0.4:56722 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121072s
	[INFO] 10.244.0.4:43652 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174608s
	[INFO] 10.244.0.4:32818 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00017028s
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:28:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7b5b85e2c64405f969f3e24eb671b2e
	  System UUID:                7f844fbb-0000-0000-b5d6-699bdfe1640c
	  Boot ID:                    cb211998-dc9c-4fd5-a169-3f6eeb2403fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x7m6m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-hr2qx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-vc76p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-lvdbg                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5l56s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-431000 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	
	
	Name:               ha-431000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:41:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-431000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 21fb6f298fbf435c88fd6e9f9b50e04f
	  System UUID:                decf4e23-0000-0000-95db-084dbcc69753
	  Boot ID:                    330a7904-5229-4d07-9792-de118102386c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2l9lq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-431000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-qmgqd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-431000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-431000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-5h7j2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-431000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-431000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	
	
	==> dmesg <==
	[  +2.712596] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230971] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.519395] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.106046] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.754357] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.260100] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.108326] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.116397] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.050322] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.370658] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.100232] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.114416] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.133019] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.706453] systemd-fstab-generator[1261]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.542020] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +4.524199] systemd-fstab-generator[1651]: Ignoring "noauto" option for root device
	[  +0.058523] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.145787] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
	[  +0.090131] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.001426] kauditd_printk_skb: 35 callbacks suppressed
	[Aug19 17:28] kauditd_printk_skb: 15 callbacks suppressed
	[ +36.695422] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [39fe08877284] <==
	{"level":"info","ts":"2024-08-19T17:28:39.576807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860) learners=(13991592590719088728)"}
	{"level":"info","ts":"2024-08-19T17:28:39.576958Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"c22c1f54a3cc7858","added-peer-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-19T17:28:39.577171Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577230Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577486Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577607Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577632Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-19T17:28:39.577678Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577764Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577976Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.578023Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582369Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T17:28:40.582407Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582418Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.596476Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.597370Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T17:28:40.597585Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.605913Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:41.107824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860 13991592590719088728)"}
	{"level":"info","ts":"2024-08-19T17:28:41.107895Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-19T17:28:41.107911Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:32:31.484329Z","caller":"traceutil/trace.go:171","msg":"trace[1768622606] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"105.97642ms","start":"2024-08-19T17:32:31.378330Z","end":"2024-08-19T17:32:31.484306Z","steps":["trace[1768622606] 'process raft request'  (duration: 69.010204ms)","trace[1768622606] 'compare'  (duration: 36.887791ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:37:40.726136Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1233}
	{"level":"info","ts":"2024-08-19T17:37:40.747676Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1233,"took":"20.998439ms","hash":1199177849,"current-db-size-bytes":3051520,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-19T17:37:40.747929Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1199177849,"revision":1233,"compact-revision":-1}
	
	
	==> kernel <==
	 17:42:05 up 14 min,  0 users,  load average: 0.04, 0.13, 0.09
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:41:03.913704       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:13.921879       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:13.922055       1 main.go:299] handling current node
	I0819 17:41:13.922135       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:13.922216       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:23.922941       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:23.923249       1 main.go:299] handling current node
	I0819 17:41:23.923348       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:23.923383       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:33.918589       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:33.918730       1 main.go:299] handling current node
	I0819 17:41:33.918774       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:33.918810       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:43.921725       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:43.921764       1 main.go:299] handling current node
	I0819 17:41:43.921776       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:43.921781       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:41:53.913960       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:41:53.914062       1 main.go:299] handling current node
	I0819 17:41:53.914082       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:41:53.914091       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:03.913678       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:42:03.913720       1 main.go:299] handling current node
	I0819 17:42:03.913732       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:03.913737       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [262471364c99] <==
	I0819 17:27:42.843862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 17:27:42.851035       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 17:27:42.851176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:27:43.131229       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 17:27:43.156609       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 17:27:43.228677       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 17:27:43.232630       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0819 17:27:43.233263       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:27:43.235521       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 17:27:43.816793       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 17:27:45.642805       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 17:27:45.648554       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 17:27:45.656204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 17:27:49.372173       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 17:27:49.521616       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 17:41:58.471372       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51273: use of closed network connection
	E0819 17:41:58.792809       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51278: use of closed network connection
	E0819 17:41:58.976708       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51280: use of closed network connection
	E0819 17:41:59.288867       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51285: use of closed network connection
	E0819 17:41:59.474614       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51287: use of closed network connection
	E0819 17:41:59.785950       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51292: use of closed network connection
	E0819 17:42:02.821757       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51320: use of closed network connection
	E0819 17:42:03.005704       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51322: use of closed network connection
	E0819 17:42:03.316458       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51327: use of closed network connection
	E0819 17:42:03.527436       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51329: use of closed network connection
	
	
	==> kube-controller-manager [2801f8f44773] <==
	I0819 17:28:46.812463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:46.910622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:49.488441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:58.619481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:28:58.630217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:29:01.828992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:29:09.962018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:30:22.272615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.179354ms"
	I0819 17:30:22.288765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.8458ms"
	I0819 17:30:22.344803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="55.991929ms"
	I0819 17:30:22.374182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.188136ms"
	I0819 17:30:22.381153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.695075ms"
	I0819 17:30:22.382352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.585µs"
	I0819 17:30:22.399951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.69495ms"
	I0819 17:30:22.400210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="24.929µs"
	I0819 17:30:24.244155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.898617ms"
	I0819 17:30:24.244396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.117µs"
	I0819 17:30:24.566063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.881458ms"
	I0819 17:30:24.566244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.693µs"
	I0819 17:30:41.556624       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:30:49.673928       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	I0819 17:35:47.271228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:35:55.416754       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	I0819 17:40:53.216070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:41:01.735584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	
	
	==> kube-proxy [889ab608901b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:27:50.162614       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:27:50.171417       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0819 17:27:50.171450       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:27:50.239161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:27:50.239202       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:27:50.239220       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:27:50.242102       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:27:50.242306       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:27:50.242335       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:27:50.253458       1 config.go:197] "Starting service config controller"
	I0819 17:27:50.253497       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:27:50.253518       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:27:50.253542       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:27:50.253889       1 config.go:326] "Starting node config controller"
	I0819 17:27:50.253915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:27:50.354735       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:27:50.354788       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:27:50.354817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	W0819 17:27:41.846154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:27:41.846286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:41.846418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:41.846569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.722533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:27:42.722591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.808762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 17:27:42.808891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.853276       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:27:42.853353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.858509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:27:42.858619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.867998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:27:42.868077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.900445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:27:42.900541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.970545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:27:42.970765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:43.004003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:43.004103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:27:43.339820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:30:22.272037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:30:22.273195       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e37fe27d-f1bf-427d-a76d-96722b0c74a1(default/busybox-7dff88458-x7m6m) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x7m6m"
	E0819 17:30:22.273433       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" pod="default/busybox-7dff88458-x7m6m"
	I0819 17:30:22.273582       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	
	
	==> kubelet <==
	Aug 19 17:37:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:37:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:37:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:37:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:38:45 ha-431000 kubelet[2153]: E0819 17:38:45.527347    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:38:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:39:45 ha-431000 kubelet[2153]: E0819 17:39:45.526214    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:39:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:40:45 ha-431000 kubelet[2153]: E0819 17:40:45.529172    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:40:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:41:45 ha-431000 kubelet[2153]: E0819 17:41:45.526920    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:41:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:41:59 ha-431000 kubelet[2153]: E0819 17:41:59.290192    2153 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49834->127.0.0.1:35619: write tcp 127.0.0.1:49834->127.0.0.1:35619: write: broken pipe
	

-- /stdout --
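
Note on the kubelet and kube-proxy errors in the log dump above: both components hit the same guest-kernel limitation. kube-proxy's nftables cleanup fails with "Operation not supported", and kubelet's once-a-minute iptables canary fails because the ip6tables `nat' table does not exist in the Buildroot guest kernel. Neither error is fatal here: the IPv4 proxier initializes normally ("Using iptables Proxier") and the node stays Ready. A quick check from the host, sketched under the assumption that the ha-431000 profile is still running (minikube ssh and its command pass-through are standard; the specific commands are just a diagnostic suggestion, not part of the test):

	# Does the guest kernel expose an ip6 nat table at all?
	out/minikube-darwin-amd64 ssh -p ha-431000 -- "sudo ip6tables -t nat -L -n"
	# If that fails, the ip6table_nat module is likely not built or loaded:
	out/minikube-darwin-amd64 ssh -p ha-431000 -- "lsmod | grep ip6table_nat"
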
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-wfcpq
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-431000 describe pod busybox-7dff88458-wfcpq
helpers_test.go:282: (dbg) kubectl --context ha-431000 describe pod busybox-7dff88458-wfcpq:

-- stdout --
	Name:             busybox-7dff88458-wfcpq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t489x (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t489x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  11m                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  82s (x2 over 6m22s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  80s (x3 over 11m)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (3.72s)
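
The post-mortem above shows the failure mode: busybox-7dff88458-wfcpq stays Pending because the scheduler reports "2 node(s) didn't match pod anti-affinity rules" on both existing nodes, i.e. each node already runs a busybox replica and the deployment appears to require one replica per node. The manifest itself is not in this log, so the safe way to confirm the rule is to read it back from the cluster (the deployment name busybox is inferred from the ReplicaSet busybox-7dff88458; this is an illustrative check, not part of the test run):

	kubectl --context ha-431000 get deployment busybox \
	  -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'

If a required anti-affinity term on the busybox app label comes back, the Pending pod is expected until a third schedulable node joins, which is exactly what the next test (AddWorkerNode) does.
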

TestMultiControlPlane/serial/AddWorkerNode (53.91s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-431000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-431000 -v=7 --alsologtostderr: (50.614732074s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (444.809313ms)
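
status exits non-zero here because ha-431000-m03 reports kubelet and apiserver as Stopped in the output below, even though the node add itself succeeded. When only a summary is needed, the --format template flag already used elsewhere in this run can pull the same fields; the field names are taken from the status struct logged in the stderr trace below, and the exact template is illustrative:

	out/minikube-darwin-amd64 -p ha-431000 status \
	  --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
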

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:42:56.855289    6222 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:42:56.855841    6222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:42:56.855846    6222 out.go:358] Setting ErrFile to fd 2...
	I0819 10:42:56.855850    6222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:42:56.856028    6222 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:42:56.856216    6222 out.go:352] Setting JSON to false
	I0819 10:42:56.856241    6222 mustload.go:65] Loading cluster: ha-431000
	I0819 10:42:56.856277    6222 notify.go:220] Checking for updates...
	I0819 10:42:56.856575    6222 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:42:56.856591    6222 status.go:255] checking status of ha-431000 ...
	I0819 10:42:56.856955    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:56.857007    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:56.865945    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51401
	I0819 10:42:56.866355    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:56.866760    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:56.866770    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:56.866975    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:56.867080    6222 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:42:56.867160    6222 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:42:56.867239    6222 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:42:56.868197    6222 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:42:56.868219    6222 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:42:56.868455    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:56.868475    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:56.876867    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51404
	I0819 10:42:56.877169    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:56.877484    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:56.877503    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:56.877704    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:56.877809    6222 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:42:56.877884    6222 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:42:56.878130    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:56.878154    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:56.886432    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51406
	I0819 10:42:56.886732    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:56.887107    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:56.887129    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:56.887323    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:56.887431    6222 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:42:56.887579    6222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:42:56.887604    6222 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:42:56.887690    6222 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:42:56.887787    6222 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:42:56.887878    6222 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:42:56.887969    6222 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:42:56.923617    6222 ssh_runner.go:195] Run: systemctl --version
	I0819 10:42:56.928528    6222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:42:56.939576    6222 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:42:56.939605    6222 api_server.go:166] Checking apiserver status ...
	I0819 10:42:56.939647    6222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:42:56.950873    6222 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W0819 10:42:56.958430    6222 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:42:56.959162    6222 ssh_runner.go:195] Run: ls
	I0819 10:42:56.962658    6222 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:42:56.966855    6222 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:42:56.966869    6222 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:42:56.966878    6222 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:42:56.966890    6222 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:42:56.967152    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:56.967184    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:56.976039    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51410
	I0819 10:42:56.976367    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:56.976726    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:56.976741    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:56.976965    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:56.977088    6222 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:42:56.977174    6222 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:42:56.977255    6222 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:42:56.978237    6222 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:42:56.978248    6222 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:42:56.978483    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:56.978530    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:56.987334    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51412
	I0819 10:42:56.987718    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:56.988110    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:56.988123    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:56.988383    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:56.988499    6222 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:42:56.988594    6222 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:42:56.988887    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:56.988913    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:56.997820    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51414
	I0819 10:42:56.998177    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:56.998496    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:56.998507    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:56.998704    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:56.998813    6222 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:42:56.998933    6222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:42:56.998944    6222 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:42:56.999031    6222 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:42:56.999110    6222 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:42:56.999189    6222 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:42:56.999261    6222 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:42:57.044193    6222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:42:57.056566    6222 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:42:57.056580    6222 api_server.go:166] Checking apiserver status ...
	I0819 10:42:57.056616    6222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:42:57.068743    6222 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0819 10:42:57.076119    6222 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:42:57.076164    6222 ssh_runner.go:195] Run: ls
	I0819 10:42:57.079495    6222 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:42:57.082697    6222 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:42:57.082709    6222 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:42:57.082718    6222 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:42:57.082738    6222 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:42:57.083016    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:57.083038    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:57.091762    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51418
	I0819 10:42:57.092095    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:57.092400    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:57.092409    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:57.092620    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:57.092734    6222 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:42:57.092823    6222 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:42:57.092903    6222 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:42:57.093863    6222 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:42:57.093871    6222 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:42:57.094121    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:57.094149    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:57.102714    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51420
	I0819 10:42:57.103045    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:57.103355    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:57.103366    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:57.103559    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:57.103661    6222 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:42:57.103744    6222 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:42:57.103996    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:57.104033    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:57.112786    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51422
	I0819 10:42:57.113110    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:57.113437    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:57.113447    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:57.113694    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:57.113810    6222 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:42:57.113942    6222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:42:57.113960    6222 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:42:57.114050    6222 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:42:57.114153    6222 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:42:57.114282    6222 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:42:57.114363    6222 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:42:57.149173    6222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:42:57.160860    6222 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:42:57.160874    6222 api_server.go:166] Checking apiserver status ...
	I0819 10:42:57.160912    6222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:42:57.171397    6222 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:42:57.171406    6222 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:42:57.171416    6222 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:42:57.171425    6222 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:42:57.171714    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:57.171736    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:57.180414    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51425
	I0819 10:42:57.180749    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:57.181065    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:57.181078    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:57.181290    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:57.181398    6222 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:42:57.181480    6222 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:42:57.181556    6222 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:42:57.182517    6222 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:42:57.182527    6222 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:42:57.182769    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:57.182789    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:57.191645    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51427
	I0819 10:42:57.191981    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:57.192326    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:57.192344    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:57.192552    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:57.192650    6222 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:42:57.192725    6222 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:42:57.192963    6222 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:42:57.192985    6222 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:42:57.201474    6222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51429
	I0819 10:42:57.201807    6222 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:42:57.202146    6222 main.go:141] libmachine: Using API Version  1
	I0819 10:42:57.202163    6222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:42:57.202402    6222 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:42:57.202524    6222 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:42:57.202656    6222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:42:57.202668    6222 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:42:57.202752    6222 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:42:57.202826    6222 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:42:57.202925    6222 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:42:57.202997    6222 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:42:57.231804    6222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:42:57.242067    6222 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr" : exit status 2
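For context on the verdicts in the stderr above: the status probe checks each control-plane node in three observable steps, a pgrep for the kube-apiserver pid, an egrep of that pid's freezer cgroup (a miss is only logged as a warning), and an HTTPS GET of /healthz on the HA virtual IP 192.169.0.254. The Go sketch below reconstructs that sequence from the log lines alone; it executes the commands locally for illustration, whereas minikube runs them through its SSH runner on each node, so treat the helper, its error handling, and the hard-coded VIP as assumptions rather than minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	// apiserverStatus mirrors the three probe steps visible in the stderr:
	// pgrep for the apiserver pid (api_server.go:166), a freezer-cgroup
	// check that only warns on failure (api_server.go:177), and a GET on
	// /healthz at the HA virtual IP (api_server.go:253).
	func apiserverStatus(vip string) string {
		// Step 1: locate the apiserver process. On ha-431000-m03 this is
		// the step that exits 1, yielding the "Stopped" verdict above.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return "Stopped"
		}
		pid := strings.TrimSpace(string(out))

		// Step 2: freezer cgroup lookup; a miss is logged as a warning only.
		cgroup := exec.Command("sudo", "sh", "-c", "egrep '^[0-9]+:freezer:' /proc/"+pid+"/cgroup")
		if cgroup.Run() != nil {
			fmt.Println("warning: unable to find freezer cgroup")
		}

		// Step 3: healthz on the VIP; the cluster certificate is
		// self-signed, so verification is skipped in this sketch.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://" + vip + ":8443/healthz")
		if err != nil {
			return "Stopped"
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode == 200 && strings.TrimSpace(string(body)) == "ok" {
			return "Running"
		}
		return "Stopped"
	}

	func main() {
		fmt.Println("apiserver status:", apiserverStatus("192.169.0.254"))
	}

On ha-431000-m03 the pgrep in step 1 exits with status 1, which is why that node reports APIServer:Stopped even though its host is Running and its kubeconfig is Configured.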
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
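The --format flag on the verification command above takes a Go text/template that is evaluated against the node's status record, so {{.Host}} collapses the output to just the host state. A minimal sketch, assuming a hypothetical mirror of the struct printed at status.go:257 above (the real type lives inside minikube):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical mirror of the fields shown at status.go:257.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Name: "ha-431000-m03", Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		_ = tmpl.Execute(os.Stdout, st) // prints: Running
	}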
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.29188222s)
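The Done line above records the wall-clock time of the 25-line log capture. A minimal sketch of that run-and-time step, assuming it is launched from the same workspace so the relative binary path resolves:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Same invocation as the post-mortem step above.
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "ha-431000", "logs", "-n", "25")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Fprintln(os.Stderr, "logs failed:", err)
		}
		os.Stdout.Write(out)
		// Matches the "(dbg) Done: ...: (2.29188222s)" elapsed-time format.
		fmt.Printf("(dbg) Done: %s: (%s)\n", cmd.String(), time.Since(start))
	}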
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT |                     |
	|         | busybox-7dff88458-wfcpq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-431000 -v=7                | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:27:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:27:09.441458    4789 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:27:09.441716    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441721    4789 out.go:358] Setting ErrFile to fd 2...
	I0819 10:27:09.441725    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441914    4789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:27:09.443405    4789 out.go:352] Setting JSON to false
	I0819 10:27:09.468451    4789 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3399,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:27:09.468547    4789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:27:09.554597    4789 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:27:09.577770    4789 notify.go:220] Checking for updates...
	I0819 10:27:09.609734    4789 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:27:09.676944    4789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:09.699980    4789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:27:09.722951    4789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:27:09.744804    4789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.765726    4789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:27:09.787204    4789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:27:09.817679    4789 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 10:27:09.859821    4789 start.go:297] selected driver: hyperkit
	I0819 10:27:09.859849    4789 start.go:901] validating driver "hyperkit" against <nil>
	I0819 10:27:09.859893    4789 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:27:09.864287    4789 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.864395    4789 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:27:09.872759    4789 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:27:09.876743    4789 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.876768    4789 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:27:09.876803    4789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:27:09.877011    4789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:27:09.877072    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:09.877082    4789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 10:27:09.877094    4789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:27:09.877164    4789 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:09.877251    4789 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.919755    4789 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:27:09.940604    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:09.940675    4789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:27:09.940720    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:09.940918    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:09.940931    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:09.941271    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:09.941299    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json: {Name:mkf9dcbb24d8b9fbe62d81f81a7a87fec457d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:09.941835    4789 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:09.941963    4789 start.go:364] duration metric: took 95.166µs to acquireMachinesLock for "ha-431000"
	I0819 10:27:09.941997    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:09.942082    4789 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 10:27:09.963791    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:09.964075    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.964148    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:09.974068    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51111
	I0819 10:27:09.974512    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:09.974919    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:09.974932    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:09.975172    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:09.975283    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:09.975374    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:09.975471    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:09.975492    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:09.975527    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:09.975578    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975594    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975657    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:09.975695    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975707    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975719    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:09.975729    4789 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0819 10:27:09.975800    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.975970    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:09.976388    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:09.976397    4789 main.go:141] libmachine: (ha-431000) Calling .Create
	I0819 10:27:09.976462    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.976580    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:09.976459    4799 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.976633    4789 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:10.160305    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.160220    4799 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa...
	I0819 10:27:10.258779    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.258678    4799 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0819 10:27:10.258792    4789 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0819 10:27:10.258800    4789 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0819 10:27:10.259681    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.259588    4799 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000 ...
	I0819 10:27:10.634434    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.634476    4789 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:27:10.634529    4789 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:27:10.744945    4789 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:27:10.744966    4789 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:10.744993    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745030    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745065    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:10.745094    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:10.745118    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:10.748020    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Pid is 4802
	I0819 10:27:10.748404    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:27:10.748413    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.748494    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:10.749357    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:10.749398    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:10.749412    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:10.749423    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:10.749431    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:10.755634    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:10.806699    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:10.807300    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:10.807314    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:10.807322    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:10.807335    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.184562    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:11.184575    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:11.299194    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:11.299213    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:11.299228    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:11.299236    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.300075    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:11.300086    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:12.750038    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0819 10:27:12.750054    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:12.750189    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:12.750969    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:12.751019    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:12.751030    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:12.751039    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:12.751052    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:14.752158    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0819 10:27:14.752174    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:14.752264    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:14.753040    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:14.753090    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:14.753102    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:14.753111    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:14.753117    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.754325    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0819 10:27:16.754340    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:16.754402    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:16.755326    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:16.755347    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:16.755354    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:16.755373    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:16.755390    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.856153    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:27:16.856252    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:27:16.856262    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:27:16.880804    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:27:18.757489    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0819 10:27:18.757504    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:18.757601    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:18.758394    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:18.758435    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:18.758449    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:18.758481    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:18.758495    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:20.758927    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0819 10:27:20.758946    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.759035    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.759848    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:20.759873    4789 main.go:141] libmachine: (ha-431000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:20.759888    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:20.759901    4789 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:27:20.759913    4789 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
	I0819 10:27:20.759952    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:20.760523    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760634    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760741    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:27:20.760753    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:20.760839    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.760885    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.761678    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:27:20.761690    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:27:20.761696    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:27:20.761702    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:20.761795    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:20.761883    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.761969    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.762060    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:20.762168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:20.762361    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:20.762369    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:27:21.818394    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:27:21.818406    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:27:21.818419    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.818554    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.818654    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818747    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818841    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.818981    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.819131    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.819139    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:27:21.870784    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:27:21.870826    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:27:21.870831    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:27:21.870837    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.870976    4789 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:27:21.870986    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.871077    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.871169    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.871272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871352    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871452    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.871577    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.871711    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.871719    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:27:21.937676    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:27:21.937694    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.937826    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.937927    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938017    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938112    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.938245    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.938391    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.938402    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:27:21.996654    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:27:21.996676    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:27:21.996692    4789 buildroot.go:174] setting up certificates
	I0819 10:27:21.996701    4789 provision.go:84] configureAuth start
	I0819 10:27:21.996714    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.996873    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:21.996990    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.997094    4789 provision.go:143] copyHostCerts
	I0819 10:27:21.997133    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997201    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:27:21.997209    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997337    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:27:21.997534    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997567    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:27:21.997572    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997714    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:27:21.997882    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.997926    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:27:21.997941    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.998049    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:27:21.998203    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:27:22.044837    4789 provision.go:177] copyRemoteCerts
	I0819 10:27:22.044896    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:27:22.044908    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.045021    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.045107    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.045191    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.045288    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:22.078701    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:27:22.078779    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:27:22.098027    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:27:22.098092    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:27:22.117169    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:27:22.117235    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:27:22.137411    4789 provision.go:87] duration metric: took 140.68689ms to configureAuth
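The configureAuth step above copies the host CA material into place and mints a server certificate whose SANs match the provision.go:117 line at 10:27:21 (127.0.0.1, 192.169.0.5, ha-431000, localhost, minikube). For readers reproducing that certificate shape by hand, a minimal openssl sketch; the file names are illustrative, not the paths minikube manages:

    # hypothetical paths; SAN list mirrors the one logged above
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-431000"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.5,DNS:ha-431000,DNS:localhost,DNS:minikube')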
	I0819 10:27:22.137424    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:27:22.137558    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:22.137574    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:22.137700    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.137783    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.137859    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.137942    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.138028    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.138134    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.138266    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.138274    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:27:22.191384    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:27:22.191397    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:27:22.191469    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:27:22.191481    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.191636    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.191724    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191834    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191924    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.192051    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.192193    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.192236    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:27:22.256138    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:27:22.256165    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.256301    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.256391    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256475    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256578    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.256695    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.256839    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.256851    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:27:23.816844    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
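The command above installs the rendered unit only when it differs from what is already on disk, so reruns are idempotent; the diff failing with "No such file or directory" just means this is the first install. A hedged way to confirm the result from the host (the `minikube ssh` invocation form is illustrative; the profile name comes from this run):

    minikube ssh -p ha-431000 "sudo systemctl cat docker"
    minikube ssh -p ha-431000 "systemctl is-active docker"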
	I0819 10:27:23.816860    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:27:23.816871    4789 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0819 10:27:23.817008    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:27:23.817016    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:27:23.817020    4789 client.go:171] duration metric: took 13.841219093s to LocalClient.Create
	I0819 10:27:23.817036    4789 start.go:167] duration metric: took 13.84126124s to libmachine.API.Create "ha-431000"
	I0819 10:27:23.817044    4789 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:27:23.817051    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:27:23.817063    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.817219    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:27:23.817232    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.817321    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.817402    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.817497    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.817595    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.852993    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:27:23.857771    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:27:23.857792    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:27:23.857909    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:27:23.858094    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:27:23.858100    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:27:23.858323    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:27:23.868639    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:23.894485    4789 start.go:296] duration metric: took 77.430316ms for postStartSetup
	I0819 10:27:23.894509    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:23.895099    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.895256    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:23.895585    4789 start.go:128] duration metric: took 13.953185373s to createHost
	I0819 10:27:23.895598    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.895691    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.895790    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895879    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895966    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.896069    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:23.896228    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:23.896236    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:27:23.956133    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088443.744394113
	
	I0819 10:27:23.956145    4789 fix.go:216] guest clock: 1724088443.744394113
	I0819 10:27:23.956151    4789 fix.go:229] Guest: 2024-08-19 10:27:23.744394113 -0700 PDT Remote: 2024-08-19 10:27:23.895593 -0700 PDT m=+14.491162031 (delta=-151.198887ms)
	I0819 10:27:23.956169    4789 fix.go:200] guest clock delta is within tolerance: -151.198887ms
	I0819 10:27:23.956173    4789 start.go:83] releasing machines lock for "ha-431000", held for 14.013893151s
	I0819 10:27:23.956192    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956322    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.956416    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956749    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956860    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956951    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:27:23.956980    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957023    4789 ssh_runner.go:195] Run: cat /version.json
	I0819 10:27:23.957036    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957073    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957109    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957170    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957184    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957292    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957350    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.957384    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:24.032926    4789 ssh_runner.go:195] Run: systemctl --version
	I0819 10:27:24.037723    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:27:24.041939    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:27:24.041985    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:27:24.055424    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:27:24.055435    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.055529    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.070257    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:27:24.079169    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:27:24.088264    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.088319    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:27:24.097172    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.105902    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:27:24.114585    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.123406    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:27:24.132626    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:27:24.141378    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:27:24.150490    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:27:24.158980    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:27:24.167068    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
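The two commands above read net.bridge.bridge-nf-call-iptables and enable IPv4 forwarding for the current boot only. For contrast, the conventional persistent form from the Kubernetes setup docs looks like the sketch below; it is not what this log does, since the minikube guest is rebuilt each run:

    # survives reboots, unlike writing /proc/sys directly
    cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system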
	I0819 10:27:24.175030    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.269460    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:27:24.289328    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.289405    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:27:24.304907    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.317291    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:27:24.330289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.340851    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.351456    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:27:24.376914    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.387402    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.402522    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:27:24.405426    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:27:24.412799    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:27:24.426019    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:27:24.528550    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:27:24.636829    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.636893    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:27:24.652027    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.753641    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:27.037286    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.283575266s)
	I0819 10:27:27.037346    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:27:27.047775    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:27:27.062961    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.074027    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:27:27.172330    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:27:27.284593    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.395779    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:27:27.409552    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.420868    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.532356    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:27:27.591558    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:27:27.591636    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:27:27.595967    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:27:27.596013    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:27:27.599275    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:27:27.625101    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:27:27.625173    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.642636    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.693299    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:27:27.693355    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:27.693783    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:27:27.698129    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:27.708916    4789 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:27:27.708982    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:27.709038    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:27.721971    4789 docker.go:685] Got preloaded images: 
	I0819 10:27:27.721984    4789 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0819 10:27:27.722034    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:27.730353    4789 ssh_runner.go:195] Run: which lz4
	I0819 10:27:27.733218    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 10:27:27.733323    4789 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:27:27.736425    4789 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:27:27.736445    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0819 10:27:28.750864    4789 docker.go:649] duration metric: took 1.017557348s to copy over tarball
	I0819 10:27:28.750956    4789 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:27:31.074672    4789 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323648699s)
	I0819 10:27:31.074688    4789 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:27:31.100633    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:31.109680    4789 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0819 10:27:31.123335    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:31.234501    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:33.578614    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344043512s)
	I0819 10:27:33.578701    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:33.592021    4789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
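One way to cross-check a preloaded image list like the one above is against what kubeadm itself expects for the target version; a sketch, run wherever a matching kubeadm binary is available:

    kubeadm config images list --kubernetes-version v1.31.0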
	I0819 10:27:33.592040    4789 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:27:33.592048    4789 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:27:33.592132    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:27:33.592198    4789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:27:33.629283    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:33.629295    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:33.629309    4789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:27:33.629329    4789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:27:33.629424    4789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
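A rendered config like the one above can be sanity-checked before it reaches kubeadm init. A hedged sketch, assuming a kubeadm release recent enough to ship `config validate` (the file path is the one this run writes at 10:27:34):

    # compare against upstream defaults, then validate the rendered file
    kubeadm config print init-defaults
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml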
	I0819 10:27:33.629439    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:27:33.629491    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:27:33.642904    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:27:33.642969    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
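The manifest above has kube-vip hold the control-plane VIP 192.169.0.254 on eth0 via ARP leader election. Once the static pod is up, a hedged way to see which node currently owns the VIP (the mirror-pod name is assumed from the usual <pod>-<node> convention):

    # on the control-plane guest; the VIP is present only on the leader
    ip addr show eth0 | grep 192.169.0.254
    kubectl -n kube-system logs kube-vip-ha-431000 --tail=20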
	I0819 10:27:33.643018    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:27:33.652008    4789 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:27:33.652070    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:27:33.660066    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:27:33.673571    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:27:33.686700    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:27:33.700085    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0819 10:27:33.713804    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:27:33.716661    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:33.726684    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:33.822205    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:27:33.836833    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:27:33.836844    4789 certs.go:194] generating shared ca certs ...
	I0819 10:27:33.836855    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.837051    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:27:33.837132    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:27:33.837142    4789 certs.go:256] generating profile certs ...
	I0819 10:27:33.837189    4789 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:27:33.837203    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0819 10:27:33.888319    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt ...
	I0819 10:27:33.888333    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt: {Name:mk2ecc34873277fbe11bf267ec0d97684e18e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888666    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key ...
	I0819 10:27:33.888675    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key: {Name:mk51abee214c838f4621902241303fe73ba93aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888900    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e
	I0819 10:27:33.888915    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0819 10:27:34.060027    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e ...
	I0819 10:27:34.060046    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e: {Name:mk108eb9cf88ab2aae15883e4a3724751adb3118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060347    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e ...
	I0819 10:27:34.060356    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e: {Name:mk8fae11cce9c9a45d3e151953d1ee9ab2cc82d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060557    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:27:34.060759    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:27:34.060929    4789 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:27:34.060943    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0819 10:27:34.243675    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0819 10:27:34.243690    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mkeb1eac7ee8b3901067565b7ff883710f2d1088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244061    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key ...
	I0819 10:27:34.244069    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key: {Name:mkc1afcd7a6a9a572716155e33c32e7def81650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244312    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:27:34.244340    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:27:34.244378    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:27:34.244398    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:27:34.244416    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:27:34.244448    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:27:34.244486    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:27:34.244521    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:27:34.244615    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:27:34.244666    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:27:34.244675    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:27:34.244748    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:27:34.244776    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:27:34.244831    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:27:34.244909    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:34.244942    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.244990    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.245007    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.245522    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:27:34.267677    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:27:34.287348    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:27:34.309971    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:27:34.330910    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 10:27:34.350036    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:27:34.370663    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:27:34.390457    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:27:34.410226    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:27:34.431025    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:27:34.451232    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:27:34.471133    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:27:34.487758    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:27:34.493769    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:27:34.506308    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511941    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511996    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.519851    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:27:34.531120    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:27:34.540803    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544302    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544341    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.548724    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:27:34.558817    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:27:34.568088    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571692    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571731    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.575999    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
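The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes; that naming is what lets certificate verification find a CA in /etc/ssl/certs without scanning every file. A minimal sketch of deriving one such link:

    # the hash output decides the symlink name OpenSSL probes during verification
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"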
	I0819 10:27:34.585057    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:27:34.588207    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:27:34.588251    4789 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:34.588345    4789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:27:34.601241    4789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:27:34.609838    4789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:27:34.618794    4789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:27:34.627200    4789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:27:34.627208    4789 kubeadm.go:157] found existing configuration files:
	
	I0819 10:27:34.627243    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:27:34.635162    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:27:34.635198    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:27:34.643336    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:27:34.651247    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:27:34.651280    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:27:34.659346    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.667240    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:27:34.667281    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.675386    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:27:34.684053    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:27:34.684105    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 10:27:34.692357    4789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:27:34.751991    4789 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:27:34.752160    4789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:27:34.833970    4789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:27:34.834062    4789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:27:34.834153    4789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
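The preflight hint above can be acted on ahead of time so image downloads do not eat into the init timeout. A minimal sketch, run inside the guest and assuming the same config path this run uses (/var/tmp/minikube/kubeadm.yaml):

    # List the control-plane images this configuration needs, then pre-pull them
    sudo kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml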
	I0819 10:27:34.842513    4789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:27:34.863067    4789 out.go:235]   - Generating certificates and keys ...
	I0819 10:27:34.863126    4789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:27:34.863179    4789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:27:35.003012    4789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:27:35.766829    4789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:27:35.976153    4789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:27:36.134850    4789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:27:36.228947    4789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:27:36.229166    4789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.375842    4789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:27:36.375934    4789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.597289    4789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:27:36.907219    4789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:27:37.426404    4789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:27:37.426585    4789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:27:37.566387    4789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:27:38.000620    4789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:27:38.121335    4789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:27:38.179042    4789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:27:38.231270    4789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:27:38.231752    4789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:27:38.233818    4789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:27:38.255454    4789 out.go:235]   - Booting up control plane ...
	I0819 10:27:38.255535    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:27:38.255605    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:27:38.255655    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:27:38.255734    4789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:27:38.255809    4789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:27:38.255842    4789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:27:38.364951    4789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:27:38.365069    4789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:27:39.366309    4789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001984632s
	I0819 10:27:39.366388    4789 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:27:45.029099    4789 kubeadm.go:310] [api-check] The API server is healthy after 5.666724975s
	I0819 10:27:45.039440    4789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:27:45.046481    4789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:27:45.059797    4789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:27:45.059959    4789 kubeadm.go:310] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:27:45.067482    4789 kubeadm.go:310] [bootstrap-token] Using token: rrr6yu.ivgebthw63l7ehzv
	I0819 10:27:45.106820    4789 out.go:235]   - Configuring RBAC rules ...
	I0819 10:27:45.107004    4789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:27:45.110638    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:27:45.151902    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:27:45.154406    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:27:45.156223    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:27:45.158190    4789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:27:45.434935    4789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:27:45.846068    4789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:27:46.434136    4789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:27:46.434675    4789 kubeadm.go:310] 
	I0819 10:27:46.434724    4789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:27:46.434728    4789 kubeadm.go:310] 
	I0819 10:27:46.434798    4789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:27:46.434808    4789 kubeadm.go:310] 
	I0819 10:27:46.434829    4789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:27:46.434881    4789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:27:46.434925    4789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:27:46.434930    4789 kubeadm.go:310] 
	I0819 10:27:46.434974    4789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:27:46.434984    4789 kubeadm.go:310] 
	I0819 10:27:46.435035    4789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:27:46.435041    4789 kubeadm.go:310] 
	I0819 10:27:46.435080    4789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:27:46.435139    4789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:27:46.435197    4789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:27:46.435204    4789 kubeadm.go:310] 
	I0819 10:27:46.435268    4789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:27:46.435333    4789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:27:46.435337    4789 kubeadm.go:310] 
	I0819 10:27:46.435410    4789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435498    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd \
	I0819 10:27:46.435515    4789 kubeadm.go:310] 	--control-plane 
	I0819 10:27:46.435520    4789 kubeadm.go:310] 
	I0819 10:27:46.435589    4789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:27:46.435594    4789 kubeadm.go:310] 
	I0819 10:27:46.435664    4789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435746    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd 
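The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA to verify a join command out of band. In this run the certificate directory is /var/lib/minikube/certs (see the [certs] phase earlier), so the standard kubeadm recipe, run on the control-plane node, looks like this:

    # Recompute the sha256 discovery hash from the cluster CA public key
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the ec43ca3c... digest shown above.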
	I0819 10:27:46.435997    4789 kubeadm.go:310] W0819 17:27:34.545490    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436229    4789 kubeadm.go:310] W0819 17:27:34.546600    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436316    4789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
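The Service-Kubelet warning is cosmetic for this boot (the [kubelet-check] lines above show the kubelet already running), but the fix it suggests is a one-liner inside the guest:

    # Enable the kubelet unit so it also starts on subsequent boots
    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet.service   # prints "enabled" on success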
	I0819 10:27:46.436331    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:46.436337    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:46.458203    4789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:27:46.517773    4789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:27:46.523858    4789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:27:46.523872    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:27:46.539513    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 10:27:46.759807    4789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:27:46.759878    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:46.759883    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0819 10:27:46.777623    4789 ops.go:34] apiserver oom_adj: -16
	I0819 10:27:46.926523    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.427175    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.927281    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.428033    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.926686    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.426608    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.926666    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:50.010199    4789 kubeadm.go:1113] duration metric: took 3.25030545s to wait for elevateKubeSystemPrivileges
	I0819 10:27:50.010216    4789 kubeadm.go:394] duration metric: took 15.42163041s to StartCluster
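The burst of repeated "kubectl get sa default" calls above is minikube polling until the default ServiceAccount appears, which signals that kube-controller-manager's service-account controller is fully up. The same check by hand, using the kubeconfig this run writes:

    # Non-zero exit until the default ServiceAccount exists
    kubectl --kubeconfig /Users/jenkins/minikube-integration/19478-1622/kubeconfig \
      -n default get serviceaccount default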
	I0819 10:27:50.010227    4789 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.010325    4789 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.010762    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.011021    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:27:50.011033    4789 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.011050    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:27:50.011076    4789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:27:50.011116    4789 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0819 10:27:50.011120    4789 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0819 10:27:50.011148    4789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0819 10:27:50.011152    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.011155    4789 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0819 10:27:50.011186    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.011415    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011420    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011430    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.011431    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.020667    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
	I0819 10:27:50.021171    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021230    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
	I0819 10:27:50.021523    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021533    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.021634    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021753    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.021940    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.022115    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.022146    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.022229    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.022806    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.022988    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.023051    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.024924    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.025156    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:27:50.025529    4789 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:27:50.025699    4789 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0819 10:27:50.025720    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.025937    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.025963    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.031229    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51138
	I0819 10:27:50.031604    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.031942    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.031953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.032154    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.032270    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.032358    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.032435    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.033436    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.034958    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51140
	I0819 10:27:50.035269    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.035586    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.035596    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.035796    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.036148    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.036165    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.044937    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51142
	I0819 10:27:50.045312    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.045667    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.045680    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.045893    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.045996    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.046077    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.046151    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.047102    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.047225    4789 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.047234    4789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:27:50.047243    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.047325    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.047417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.047495    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.047571    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.056055    4789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:27:50.076134    4789 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.076146    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:27:50.076163    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.076310    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.076417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.076556    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.076664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.113554    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 10:27:50.127003    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.262022    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.488277    4789 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
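The long bash pipeline a few lines up is how that host record gets injected: the sed expressions splice a hosts block (192.169.0.1 mapped to host.minikube.internal, with fallthrough) ahead of the forward plugin and add the log plugin, then kubectl replace writes the edited Corefile back into the coredns ConfigMap. To confirm the patch, a sketch assuming the default context name:

    # Print the live Corefile and look for the injected hosts block
    kubectl --context ha-431000 -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}'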
	I0819 10:27:50.488318    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488327    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488534    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488547    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488556    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488563    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488564    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488681    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488704    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488718    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488767    4789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:27:50.488780    4789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:27:50.488862    4789 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 10:27:50.488867    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.488877    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.488882    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495057    4789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:27:50.495477    4789 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 10:27:50.495484    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.495490    4789 round_trippers.go:473]     Content-Type: application/json
	I0819 10:27:50.495494    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.498504    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:27:50.498632    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.498641    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.498797    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.498806    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.498814    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649595    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649607    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.649833    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.649843    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649848    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.649874    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649893    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.650019    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.650028    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.650044    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.673040    4789 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 10:27:50.709732    4789 addons.go:510] duration metric: took 698.654107ms for enable addons: enabled=[default-storageclass storage-provisioner]
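Both addons report success; the PUT to /storageclasses/standard above is the default-storageclass addon re-marking "standard" as the default class. A quick verification sketch (the pod name is minikube's usual one and is assumed here):

    # "standard" should carry the default-class annotation
    kubectl --context ha-431000 get storageclass
    # The provisioner pod lives in kube-system
    kubectl --context ha-431000 -n kube-system get pod storage-provisioner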
	I0819 10:27:50.709774    4789 start.go:246] waiting for cluster config update ...
	I0819 10:27:50.709799    4789 start.go:255] writing updated cluster config ...
	I0819 10:27:50.746763    4789 out.go:201] 
	I0819 10:27:50.768467    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.768565    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.790908    4789 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:27:50.832651    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:50.832673    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:50.832790    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:50.832801    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:50.832852    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.833261    4789 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:50.833314    4789 start.go:364] duration metric: took 41.162µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:27:50.833329    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.833382    4789 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0819 10:27:50.854688    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:50.854833    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.854870    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.864309    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
	I0819 10:27:50.864640    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.864951    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.864963    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.865175    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.865294    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:27:50.865374    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:27:50.865472    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:50.865485    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:50.865515    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:50.865553    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865565    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865607    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:50.865634    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865649    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865666    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:50.865676    4789 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0819 10:27:50.865754    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.865776    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:27:50.891966    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:50.891987    4789 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0819 10:27:50.892145    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.892330    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:50.892137    4845 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:50.892421    4789 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:51.078705    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.078630    4845 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa...
	I0819 10:27:51.171843    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.171751    4845 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0819 10:27:51.171860    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0819 10:27:51.171868    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0819 10:27:51.172685    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.172591    4845 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02 ...
	I0819 10:27:51.544884    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.544910    4789 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:27:51.544922    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:27:51.571631    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:27:51.571653    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:51.571680    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571706    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571739    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:51.571767    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
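The Arguments/CmdLine dump above is the complete hyperkit invocation: -c 2 -m 2200M sizes the VM, each -s flag attaches a virtual PCI device (virtio-net NIC, virtio-blk root disk, the boot ISO on ahci-cd, a virtio entropy device), -l com1,autopty=... wires the serial console to a pty, and -f kexec,... boots the kernel and initrd directly. Two things those paths allow from the host, assuming the state directory shown in the command:

    # Check the VM process recorded in the pid file (written via -F)
    ps -o pid,command -p "$(cat /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid)"
    # Attach to the guest serial console on the autopty (detach with Ctrl-A d)
    screen /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty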
	I0819 10:27:51.571780    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:51.574668    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Pid is 4850
	I0819 10:27:51.575734    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:27:51.575757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.575783    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:51.576702    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:51.576759    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:51.576778    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:51.576816    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:51.576830    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:51.576844    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
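The driver knows only the MAC it generated (5a:74:68:47:b9:72), so it polls macOS's DHCP lease database until that MAC acquires an address; the four entries above are leases left by earlier VMs. The same lookup by hand:

    # Watch the host's DHCP leases for the generated MAC (bootpd maintains this file)
    sudo grep -B 2 -A 3 '5a:74:68:47:b9:72' /var/db/dhcpd_leases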
	I0819 10:27:51.582262    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:51.590515    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:51.591362    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:51.591388    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:51.591397    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:51.591407    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:51.978930    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:51.978947    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:52.094059    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:52.094091    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:52.094127    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:52.094142    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:52.094869    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:52.094879    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:53.577521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0819 10:27:53.577541    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:53.577636    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:53.578446    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:53.578461    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:53.578472    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:53.578481    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:53.578489    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:53.578507    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:55.579485    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0819 10:27:55.579501    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:55.579576    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:55.580358    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:55.580387    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:55.580414    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:55.580426    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:55.580434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:55.580442    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.581588    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0819 10:27:57.581603    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:57.581681    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:57.582486    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:57.582510    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:57.582521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:57.582530    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:57.582540    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:57.582548    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.680321    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:27:57.680434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:27:57.680445    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:27:57.704982    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:27:59.583757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0819 10:27:59.583772    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:59.583842    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:59.584652    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:59.584696    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:59.584710    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:59.584720    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:59.584729    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:59.584737    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:28:01.585137    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0819 10:28:01.585154    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.585235    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.585996    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:28:01.586042    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:28:01.586055    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:28:01.586080    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:28:01.586086    4789 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
	I0819 10:28:01.586098    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:01.586694    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586794    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586889    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:28:01.586896    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:28:01.586980    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.587029    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.587790    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:28:01.587796    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:28:01.587800    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:28:01.587804    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:01.587881    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:01.587956    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588060    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588138    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:01.588256    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:01.588435    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:01.588443    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:28:02.645180    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
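"exit 0" is libmachine's cheapest SSH probe: it proves sshd answers and the key is accepted without running anything. A manual equivalent, using the key generated for this machine earlier in the log and the IP found in the lease table:

    # Succeeds (exit status 0) once the guest's sshd is reachable
    ssh -i /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa \
        -o StrictHostKeyChecking=no docker@192.169.0.6 'exit 0' && echo ssh is up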
	I0819 10:28:02.645193    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:28:02.645198    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.645326    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.645422    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645501    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645583    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.645718    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.645869    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.645877    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:28:02.700961    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:28:02.700992    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:28:02.700998    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:28:02.701003    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701132    4789 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:28:02.701143    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701237    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.701327    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.701424    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701502    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701588    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.701720    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.701855    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.701864    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:28:02.773500    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:28:02.773515    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.773649    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.773737    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773840    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773945    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.774071    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.774226    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.774237    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:28:02.838956    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
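The shell fragment above is idempotent: it rewrites an existing 127.0.1.1 entry if one is present, appends one otherwise, and does nothing when the hostname is already in /etc/hosts. A sketch of how that command string can be assembled on the Go side (the helper name is illustrative, not minikube's):

    package main

    import "fmt"

    // setHostnameCmd returns the same grep/sed/tee construction seen in the
    // log: replace an existing 127.0.1.1 line, or append one, but only when
    // the hostname is not already present in /etc/hosts.
    func setHostnameCmd(hostname string) string {
        return fmt.Sprintf(`
        if ! grep -xq '.*\s%[1]s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
            else
                echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
            fi
        fi`, hostname)
    }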
	I0819 10:28:02.838971    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:28:02.838984    4789 buildroot.go:174] setting up certificates
	I0819 10:28:02.838992    4789 provision.go:84] configureAuth start
	I0819 10:28:02.838998    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.839135    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:02.839223    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.839322    4789 provision.go:143] copyHostCerts
	I0819 10:28:02.839347    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839393    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:28:02.839399    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839532    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:28:02.839738    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839769    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:28:02.839774    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839845    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:28:02.839992    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840021    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:28:02.840025    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840090    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:28:02.840244    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
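The "generating server cert" step issues a certificate whose subject alternative names are exactly the san=[...] list printed above. A self-contained sketch of issuing such a cert from an already-loaded CA with crypto/x509; the organization, lifetime, and key size are assumptions for illustration, not values recovered from this run:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate with the SANs from the log
    // line above, using a pre-loaded CA cert and key.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log: IPs plus hostnames.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
            DNSNames:    []string{"ha-431000-m02", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }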
	I0819 10:28:02.878856    4789 provision.go:177] copyRemoteCerts
	I0819 10:28:02.878899    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:28:02.878912    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.879041    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.879132    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.879231    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.879330    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:02.914748    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:28:02.914819    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:28:02.934608    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:28:02.934673    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:28:02.954833    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:28:02.954900    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:28:02.974652    4789 provision.go:87] duration metric: took 135.649275ms to configureAuth
	I0819 10:28:02.974666    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:28:02.974809    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:02.974823    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:02.974958    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.975063    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.975147    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975219    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975328    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.975454    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.975601    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.975609    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:28:03.033628    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:28:03.033639    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:28:03.033715    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:28:03.033730    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.033861    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.033950    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034053    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034140    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.034264    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.034412    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.034459    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:28:03.102644    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:28:03.102663    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.102811    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.102898    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.102999    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.103120    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.103244    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.103390    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.103404    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:28:04.637367    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
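The diff-or-replace one-liner above only swaps in docker.service.new and restarts Docker when the unit content actually changed, which keeps repeated provisioning idempotent (here the diff fails because the unit did not exist yet, so the new file is installed and the service enabled). The same guard, sketched locally in Go; this mirrors the shell construction in the log and is not minikube code:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // replaceIfChanged installs newPath over oldPath and restarts the unit
    // only when the contents differ -- the same effect as the log's
    // `diff -u old new || { mv ...; systemctl ... }` construction.
    func replaceIfChanged(oldPath, newPath, unit string) error {
        oldData, _ := os.ReadFile(oldPath) // a missing file behaves like a failing diff
        newData, err := os.ReadFile(newPath)
        if err != nil {
            return err
        }
        if bytes.Equal(oldData, newData) {
            return os.Remove(newPath) // nothing to do
        }
        if err := os.Rename(newPath, oldPath); err != nil {
            return err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }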
	I0819 10:28:04.637381    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:28:04.637388    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0819 10:28:04.637524    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:28:04.637530    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:28:04.637534    4789 client.go:171] duration metric: took 13.771742286s to LocalClient.Create
	I0819 10:28:04.637544    4789 start.go:167] duration metric: took 13.771771513s to libmachine.API.Create "ha-431000"
	I0819 10:28:04.637550    4789 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:28:04.637557    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:28:04.637566    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.637712    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:28:04.637723    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.637834    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.637926    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.638026    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.638127    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.678475    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:28:04.682965    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:28:04.682980    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:28:04.683079    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:28:04.683246    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:28:04.683253    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:28:04.683434    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:28:04.695086    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:04.723279    4789 start.go:296] duration metric: took 85.720185ms for postStartSetup
	I0819 10:28:04.723311    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:04.723943    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.724123    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:28:04.724446    4789 start.go:128] duration metric: took 13.890752069s to createHost
	I0819 10:28:04.724460    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.724558    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.724679    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724786    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724871    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.724979    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:04.725097    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:04.725103    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:28:04.784682    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088484.852271103
	
	I0819 10:28:04.784694    4789 fix.go:216] guest clock: 1724088484.852271103
	I0819 10:28:04.784698    4789 fix.go:229] Guest: 2024-08-19 10:28:04.852271103 -0700 PDT Remote: 2024-08-19 10:28:04.724454 -0700 PDT m=+55.319126445 (delta=127.817103ms)
	I0819 10:28:04.784725    4789 fix.go:200] guest clock delta is within tolerance: 127.817103ms
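The guest clock check above runs `date +%s.%N` on the VM and compares it with the host clock, accepting the ~128ms delta as within tolerance. Parsing that output and computing the delta, as a sketch (float64 parsing loses the nanosecond tail at epoch scale, which is fine for a millisecond-level tolerance):

    package main

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output (e.g. "1724088484.852271103")
    // and returns guest-minus-host.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }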
	I0819 10:28:04.784731    4789 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.951104834s
	I0819 10:28:04.784750    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.784884    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.807240    4789 out.go:177] * Found network options:
	I0819 10:28:04.829600    4789 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:28:04.851548    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.851607    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852495    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852747    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852876    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:28:04.852915    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:28:04.852962    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.853080    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:28:04.853100    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.853127    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853372    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853394    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853596    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853633    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853742    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853804    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.853880    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:28:04.886788    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:28:04.886847    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:28:04.931189    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:28:04.931209    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:04.931315    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:04.947443    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:28:04.955693    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:28:04.964155    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:28:04.964197    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:28:04.972493    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.980548    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:28:04.988709    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.996856    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:28:05.005271    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:28:05.013575    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:28:05.021801    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:28:05.030285    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:28:05.037842    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:28:05.045332    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.140730    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:28:05.159555    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:05.159625    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:28:05.177222    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.189624    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:28:05.203743    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.214606    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.224836    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:28:05.249649    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.261132    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:05.276191    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:28:05.279129    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:28:05.287175    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:28:05.300748    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:28:05.396444    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:28:05.505778    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:28:05.505805    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
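"scp memory --> /etc/docker/daemon.json" means the payload is generated in memory rather than read from a local file. One way to stream in-memory bytes to a root-owned remote path over an SSH session, sketched below; this is an illustration of the idea, not necessarily minikube's actual transfer mechanism:

    package main

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // pushBytes streams an in-memory payload to a root-owned remote path by
    // feeding it to the session's stdin and writing it with sudo tee.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }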
	I0819 10:28:05.520914    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.616215    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:28:07.911303    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.295016426s)
	I0819 10:28:07.911366    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:28:07.923467    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:28:07.938312    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:07.949283    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:28:08.046922    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:28:08.152880    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.256594    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:28:08.271339    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:08.283089    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.384798    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:28:08.441813    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:28:08.441881    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:28:08.446421    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:28:08.446473    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:28:08.449807    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:28:08.479621    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:28:08.479690    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.496571    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.537488    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:28:08.579078    4789 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:28:08.603340    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:08.603786    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:28:08.608372    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:08.618166    4789 mustload.go:65] Loading cluster: ha-431000
	I0819 10:28:08.618314    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:08.618533    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.618549    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.627122    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
	I0819 10:28:08.627459    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.627845    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.627857    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.628097    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.628239    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:28:08.628342    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:08.628430    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:28:08.629353    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:08.629592    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.629608    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.638041    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51172
	I0819 10:28:08.638388    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.638753    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.638770    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.638992    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.639108    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:08.639209    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:28:08.639216    4789 certs.go:194] generating shared ca certs ...
	I0819 10:28:08.639225    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.639357    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:28:08.639425    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:28:08.639434    4789 certs.go:256] generating profile certs ...
	I0819 10:28:08.639538    4789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:28:08.639562    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788
	I0819 10:28:08.639575    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0819 10:28:08.693749    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 ...
	I0819 10:28:08.693766    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788: {Name:mkade16cb35e521e9e55fc42d7cb129c8b94b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694149    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 ...
	I0819 10:28:08.694160    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788: {Name:mkeae0a28d48da45f84299952289f15db5f944f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694378    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:28:08.694703    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:28:08.694954    4789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:28:08.694964    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:28:08.694987    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:28:08.695006    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:28:08.695024    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:28:08.695042    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:28:08.695060    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:28:08.695078    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:28:08.695096    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:28:08.695175    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:28:08.695213    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:28:08.695228    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:28:08.695261    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:28:08.695290    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:28:08.695321    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:28:08.695400    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:08.695438    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:28:08.695462    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:28:08.695482    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:08.695511    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:08.695664    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:08.695745    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:08.695845    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:08.695925    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:08.729193    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:28:08.736181    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:28:08.748665    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:28:08.751826    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:28:08.773481    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:28:08.777252    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:28:08.787546    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:28:08.791015    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:28:08.800105    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:28:08.803218    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:28:08.812240    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:28:08.815351    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:28:08.824083    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:28:08.844052    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:28:08.864107    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:28:08.884612    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:28:08.904284    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 10:28:08.924397    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:28:08.944026    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:28:08.964689    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:28:08.984934    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:28:09.004413    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:28:09.024043    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:28:09.043924    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:28:09.058066    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:28:09.071585    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:28:09.085080    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:28:09.098536    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:28:09.112048    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:28:09.125242    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:28:09.139717    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:28:09.144032    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:28:09.152602    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.155967    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.156009    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.160192    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:28:09.168568    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:28:09.176997    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180533    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180568    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.184799    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:28:09.193356    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:28:09.201811    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205453    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205494    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.209760    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
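The 51391683.0, 3ec20f2e.0, and b5213941.0 symlinks above reproduce OpenSSL's c_rehash layout: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and a <hash>.0 link under /etc/ssl/certs lets the library locate the CA by hash lookup. Sketched in Go by shelling out to openssl, mirroring the commands in the log (illustrative, not minikube code):

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // rehash links certPath into /etc/ssl/certs under its subject-name hash,
    // matching the ln -fs commands in the log above.
    func rehash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace an existing link
        return os.Symlink(certPath, link)
    }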
	I0819 10:28:09.218392    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:28:09.222392    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:28:09.222437    4789 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:28:09.222498    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:28:09.222516    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:28:09.222559    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:28:09.234408    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:28:09.234452    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
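The manifest above is rendered after the modprobe check two steps earlier confirms IPVS support, which is what "auto-enabling control-plane load-balancing" means: the lb_enable/lb_port entries are only emitted when the modules load. A toy template-rendering sketch of that kind of conditional generation; the template text here is abbreviated for illustration, not the full manifest:

    package main

    import (
        "bytes"
        "text/template"
    )

    // Abbreviated stand-in for the full static-pod template seen above.
    const kubeVipTmpl = `    - name: address
          value: {{ .VIP }}{{ if .EnableLB }}
        - name: lb_enable
          value: "true"
        - name: lb_port
          value: "{{ .Port }}"{{ end }}
    `

    // renderKubeVip emits the env-var fragment, adding the load-balancing
    // entries only when IPVS support was detected.
    func renderKubeVip(vip, port string, enableLB bool) (string, error) {
        t, err := template.New("kube-vip").Parse(kubeVipTmpl)
        if err != nil {
            return "", err
        }
        var buf bytes.Buffer
        err = t.Execute(&buf, struct {
            VIP, Port string
            EnableLB  bool
        }{vip, port, enableLB})
        return buf.String(), err
    }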
	I0819 10:28:09.234506    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.242939    4789 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:28:09.242994    4789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0819 10:28:09.251336    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm
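Each download URL above carries a "checksum=file:" companion pointing at the binary's published .sha256. A self-contained sketch of fetching a file and verifying it against such a digest; URLs and paths are parameters, and error handling is trimmed to the essentials:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // downloadVerified writes url to dest and rejects it if its SHA-256 does
    // not match the hex digest served at sumURL.
    func downloadVerified(url, sumURL, dest string) error {
        want, err := fetchString(sumURL)
        if err != nil {
            return err
        }
        want = strings.Fields(want)[0] // .sha256 files may carry "<hash>  <name>"

        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func fetchString(url string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        return strings.TrimSpace(string(b)), err
    }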
	I0819 10:28:11.797289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:28:11.809069    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.809192    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.812267    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:28:11.812291    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 10:28:12.469259    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.469340    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.472845    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:28:12.472869    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:28:13.348737    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.348820    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.352429    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:28:13.352449    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:28:13.542994    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:28:13.550937    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:28:13.564187    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:28:13.577654    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:28:13.591433    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:28:13.594347    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:13.604347    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:13.710422    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:13.730131    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:13.730407    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:13.730448    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:13.739474    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51199
	I0819 10:28:13.739816    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:13.740174    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:13.740190    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:13.740438    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:13.740564    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:13.740661    4789 start.go:317] joinCluster: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:28:13.740750    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 10:28:13.740767    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:13.740857    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:13.740939    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:13.741027    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:13.741101    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:13.815525    4789 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:13.815563    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0819 10:28:41.108330    4789 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (27.292143754s)
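The two ssh_runner commands above are the core of adding a control-plane node: kubeadm on the existing node mints a join command (token create --print-join-command --ttl=0), and the new node runs it with --control-plane and an advertise address. A minimal Go sketch of the same two-step flow, run locally with os/exec for brevity (minikube executes these over SSH via ssh_runner; the extra flags in the log, such as --ignore-preflight-errors and --cri-socket, are omitted here):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Step 1: ask kubeadm on an existing control-plane node for a join command.
    	out, err := exec.Command("sudo", "kubeadm", "token", "create",
    		"--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		panic(err)
    	}
    	// Output looks like: kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:...
    	args := strings.Fields(strings.TrimSpace(string(out)))
    	// Step 2: run it on the joining node, promoted to a control-plane member.
    	args = append(args, "--control-plane")
    	fmt.Println("running:", strings.Join(args, " "))
    	if err := exec.Command("sudo", args...).Run(); err != nil {
    		panic(err)
    	}
    }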
	I0819 10:28:41.108351    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 10:28:41.504714    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000-m02 minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=false
	I0819 10:28:41.585348    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-431000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 10:28:41.693283    4789 start.go:319] duration metric: took 27.951997328s to joinCluster
	I0819 10:28:41.693326    4789 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:41.693537    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:41.715528    4789 out.go:177] * Verifying Kubernetes components...
	I0819 10:28:41.790354    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:41.995139    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:42.017369    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:28:42.017608    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:28:42.017650    4789 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:28:42.017827    4789 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:42.017919    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.017925    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.017930    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.017935    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.025432    4789 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 10:28:42.518902    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.518917    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.518923    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.518927    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.521742    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:43.018396    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.018411    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.018417    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.018421    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.021454    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:43.518031    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.518083    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.518106    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.518116    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.522999    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:44.018193    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.018219    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.018231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.018237    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.021854    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:44.022387    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:44.518152    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.518189    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.518196    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.518199    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.520027    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.019772    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.019792    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.019799    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.019803    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.021628    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.518039    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.518053    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.518059    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.518064    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.520113    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:46.018198    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.018232    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.018239    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.018243    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.020136    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.518474    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.518490    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.518496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.518499    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.520505    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.520916    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:47.019124    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.019150    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.019162    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.022729    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:47.518316    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.518341    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.518351    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.518356    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.520471    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:48.019594    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.019620    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.019630    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.019636    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.023447    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:48.518492    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.518526    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.518583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.518593    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.523421    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:48.523787    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:49.019217    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.019242    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.019254    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.019260    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.022862    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:49.520299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.520324    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.520337    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.520342    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.523532    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.019383    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.019412    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.019424    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.019430    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.022847    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.519489    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.519503    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.519511    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.519515    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.522131    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:51.019130    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.019153    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.019163    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.022497    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:51.022894    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:51.518391    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.518448    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.518465    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.518476    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.521848    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.019014    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.019045    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.019103    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.019117    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.022339    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.519630    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.519644    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.519651    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.519655    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.522019    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.018435    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.018460    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.018472    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.018480    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.021850    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:53.518299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.518340    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.518349    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.518355    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.520795    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.521268    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:54.020380    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.020406    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.020418    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.020423    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.024178    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:54.519346    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.519364    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.519383    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.519387    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.521155    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:55.020400    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.020425    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.020437    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.020444    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.024326    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:55.519229    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.519245    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.519264    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.519268    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.521435    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:55.521852    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:56.019678    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.019703    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.019714    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.019719    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.023317    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:56.518539    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.518563    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.518576    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.518581    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.521781    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.020424    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.020449    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.020460    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.020465    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.024114    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.519399    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.519428    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.519468    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.519475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.522788    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.523223    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:58.018734    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.018759    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.018770    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.018777    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.022242    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.518348    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.518359    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.518371    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.518375    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.522907    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.523168    4789 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:28:58.523182    4789 node_ready.go:38] duration metric: took 16.504973252s for node "ha-431000-m02" to be "Ready" ...
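The node_ready loop above is a plain poll: GET /api/v1/nodes/<name> roughly every half second until the node reports a NodeReady condition with status True. An equivalent client-go sketch (kubeconfig path, node name, and interval are illustrative, not minikube's exact implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> until the NodeReady
    // condition is True or the timeout elapses, mirroring the loop above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows roughly this cadence
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
    	// Kubeconfig path is an assumption for illustration.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitNodeReady(cs, "ha-431000-m02", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }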
	I0819 10:28:58.523189    4789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:28:58.523237    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:28:58.523243    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.523249    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.523253    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.528083    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.532699    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.532761    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:28:58.532768    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.532774    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.532776    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.535978    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.536344    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.536351    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.536358    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.536361    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.538061    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.538368    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.538377    4789 pod_ready.go:82] duration metric: took 5.660556ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538383    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538413    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:28:58.538417    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.538423    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.538428    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.540013    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.540457    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.540465    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.540471    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.540475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.542120    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.542393    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.542400    4789 pod_ready.go:82] duration metric: took 4.011453ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542406    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542439    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:28:58.542444    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.542449    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.542454    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.543986    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.544340    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.544347    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.544353    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.544356    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.545868    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.546173    4789 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.546181    4789 pod_ready.go:82] duration metric: took 3.769725ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546187    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546221    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:28:58.546226    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.546231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.546234    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.547638    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.548110    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.548118    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.548123    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.548127    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.549514    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.549853    4789 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.549860    4789 pod_ready.go:82] duration metric: took 3.668598ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.549868    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.718822    4789 request.go:632] Waited for 168.888912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718861    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718867    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.718872    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.718876    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.721032    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
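The "Waited ... due to client-side throttling, not priority and fairness" lines that follow are client-go's own rate limiter spacing out requests (its defaults are QPS 5 with burst 10), not a delay imposed by the API server. When the delays matter, a caller can raise the limits on the rest.Config before building the clientset; the values below are illustrative:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is an assumption for illustration.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cfg.QPS = 50    // client-go's default is 5 requests/second
    	cfg.Burst = 100 // client-go's default burst is 10
    	_ = kubernetes.NewForConfigOrDie(cfg)
    }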
	I0819 10:28:58.919673    4789 request.go:632] Waited for 198.011193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919731    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919740    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.919750    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.919807    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.923236    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.923670    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.923682    4789 pod_ready.go:82] duration metric: took 373.799986ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.923691    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.119399    4789 request.go:632] Waited for 195.629207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119559    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119572    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.119583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.119589    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.122804    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.318619    4789 request.go:632] Waited for 195.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318674    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318695    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.318702    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.318705    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.320812    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.321165    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.321173    4789 pod_ready.go:82] duration metric: took 397.466691ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.321180    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.520541    4789 request.go:632] Waited for 199.292765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520642    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520652    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.520663    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.520672    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.524463    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.718728    4789 request.go:632] Waited for 192.615056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718803    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718811    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.718818    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.718823    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.720955    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.721397    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.721407    4789 pod_ready.go:82] duration metric: took 400.213219ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.721415    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.918907    4789 request.go:632] Waited for 197.434904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919004    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919014    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.919024    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.919030    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.922451    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.119192    4789 request.go:632] Waited for 196.220574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119263    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119272    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.119286    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.119297    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.122630    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.122957    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.122968    4789 pod_ready.go:82] duration metric: took 401.538458ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.122977    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.320524    4789 request.go:632] Waited for 197.475989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320660    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320672    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.320681    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.320689    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.323985    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.519403    4789 request.go:632] Waited for 194.628597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519535    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519546    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.519560    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.519568    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.523121    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.523435    4789 pod_ready.go:93] pod "kube-proxy-5h7j2" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.523449    4789 pod_ready.go:82] duration metric: took 400.456993ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.523457    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.718666    4789 request.go:632] Waited for 195.15054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718742    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718752    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.718786    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.718800    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.721920    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.918782    4789 request.go:632] Waited for 196.40919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918873    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918882    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.918896    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.918906    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.922355    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.922815    4789 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.922824    4789 pod_ready.go:82] duration metric: took 399.351509ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.922830    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.118854    4789 request.go:632] Waited for 195.977175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118950    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118965    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.118981    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.118987    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.122683    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.318886    4789 request.go:632] Waited for 195.887859ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319029    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319042    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.319053    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.319063    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.322689    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.323187    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.323200    4789 pod_ready.go:82] duration metric: took 400.355182ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.323208    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.518928    4789 request.go:632] Waited for 195.662505ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519043    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519057    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.519070    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.519077    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.522736    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.718819    4789 request.go:632] Waited for 195.65197ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718885    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718891    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.718899    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.718905    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.721246    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:29:01.721682    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.721691    4789 pod_ready.go:82] duration metric: took 398.467113ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.721701    4789 pod_ready.go:39] duration metric: took 3.198431164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
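Each pod_ready wait above follows the same pattern as the node wait: fetch the pod, read its PodReady condition, then re-check the pod's node with a follow-up GET. A condensed client-go sketch of the condition check (kubeconfig path and pod name are illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady mirrors the check behind each pod_ready.go:93 line above:
    // a pod counts as "Ready" when its PodReady condition is True.
    func podIsReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-ha-431000", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(pod.Name, "ready:", podIsReady(pod))
    }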
	I0819 10:29:01.721718    4789 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:29:01.721774    4789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:29:01.735634    4789 api_server.go:72] duration metric: took 20.041851081s to wait for apiserver process to appear ...
	I0819 10:29:01.735647    4789 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:29:01.735663    4789 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:29:01.738815    4789 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
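The healthz probe above is a bare HTTPS GET that expects the literal body "ok". A stdlib sketch (certificate verification is skipped only to keep the example short; minikube itself trusts the cluster CA from its profile directory):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	c := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := c.Get("https://192.169.0.5:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver answers 200 "ok"
    }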
	I0819 10:29:01.738848    4789 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:29:01.738854    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.738860    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.738864    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.739526    4789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:29:01.739580    4789 api_server.go:141] control plane version: v1.31.0
	I0819 10:29:01.739589    4789 api_server.go:131] duration metric: took 3.937962ms to wait for apiserver health ...
	I0819 10:29:01.739594    4789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:29:01.918638    4789 request.go:632] Waited for 178.995687ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918733    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918745    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.918757    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.918762    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.922864    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:01.926606    4789 system_pods.go:59] 17 kube-system pods found
	I0819 10:29:01.926628    4789 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:01.926633    4789 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:01.926636    4789 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:01.926640    4789 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:01.926642    4789 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:01.926645    4789 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:01.926647    4789 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:01.926650    4789 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:01.926653    4789 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:01.926656    4789 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:01.926659    4789 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:01.926666    4789 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:01.926669    4789 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:01.926672    4789 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:01.926674    4789 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:01.926677    4789 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:01.926680    4789 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:01.926684    4789 system_pods.go:74] duration metric: took 187.080965ms to wait for pod list to return data ...
	I0819 10:29:01.926689    4789 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:29:02.119406    4789 request.go:632] Waited for 192.625822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119507    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119517    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.119528    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.119535    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.123120    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.123283    4789 default_sa.go:45] found service account: "default"
	I0819 10:29:02.123293    4789 default_sa.go:55] duration metric: took 196.595366ms for default service account to be created ...
	I0819 10:29:02.123300    4789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:29:02.319795    4789 request.go:632] Waited for 196.43255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319928    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319939    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.319947    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.319954    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.324586    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:02.328058    4789 system_pods.go:86] 17 kube-system pods found
	I0819 10:29:02.328071    4789 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:02.328075    4789 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:02.328078    4789 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:02.328083    4789 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:02.328086    4789 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:02.328088    4789 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:02.328091    4789 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:02.328093    4789 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:02.328096    4789 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:02.328098    4789 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:02.328101    4789 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:02.328103    4789 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:02.328106    4789 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:02.328109    4789 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:02.328112    4789 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:02.328115    4789 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:02.328117    4789 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:02.328122    4789 system_pods.go:126] duration metric: took 204.813151ms to wait for k8s-apps to be running ...
	I0819 10:29:02.328133    4789 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:29:02.328183    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:29:02.340002    4789 system_svc.go:56] duration metric: took 11.865981ms WaitForService to wait for kubelet
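The kubelet check above relies only on the exit code of systemctl is-active: zero means the unit is active, and --quiet suppresses all output. A local stand-in for what minikube runs in the guest over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit status is the whole signal; --quiet drops the textual output.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }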
	I0819 10:29:02.340017    4789 kubeadm.go:582] duration metric: took 20.646222268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:29:02.340034    4789 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:29:02.518831    4789 request.go:632] Waited for 178.726274ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518969    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518980    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.518991    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.518998    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.522659    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.523326    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523339    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523348    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523351    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523354    4789 node_conditions.go:105] duration metric: took 183.311856ms to run NodePressure ...
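The NodePressure pass lists every node, records CPU and ephemeral-storage capacity (the two figures logged above), and verifies no pressure condition is set. A client-go sketch of the same read (kubeconfig path is illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		res := n.Status.Capacity
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
    			n.Name, res.Cpu().String(), res.StorageEphemeral().String())
    		for _, c := range n.Status.Conditions {
    			// Pressure conditions must be False on a healthy node.
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if c.Status != corev1.ConditionFalse {
    					fmt.Printf("  %s is %s\n", c.Type, c.Status)
    				}
    			}
    		}
    	}
    }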
	I0819 10:29:02.523361    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:29:02.523378    4789 start.go:255] writing updated cluster config ...
	I0819 10:29:02.544110    4789 out.go:201] 
	I0819 10:29:02.566227    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:02.566358    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.588965    4789 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:29:02.630777    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:29:02.630803    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:29:02.630953    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:29:02.630966    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:29:02.631053    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.631767    4789 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:29:02.631849    4789 start.go:364] duration metric: took 64.609µs to acquireMachinesLock for "ha-431000-m03"
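acquireMachinesLock is a cross-process lock (note the Spec in the log: Delay:500ms, Timeout:13m0s) so that concurrent minikube invocations cannot create machines at the same time. minikube uses a named-mutex library for this; a stdlib-only sketch of the same retry-until-timeout idea, using a lock file purely for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock emulates the pattern logged above: try to take an exclusive
    // lock, retrying every `delay` until `timeout` elapses. (minikube uses a
    // named mutex rather than a lock file; this is an illustration only.)
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held; creating machine...")
    }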
	I0819 10:29:02.631869    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:29:02.631978    4789 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0819 10:29:02.652968    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:29:02.653116    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:29:02.653158    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:29:02.663539    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0819 10:29:02.663925    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:29:02.664263    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:29:02.664277    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:29:02.664539    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:29:02.664672    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:02.664758    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:02.664867    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:29:02.664899    4789 client.go:168] LocalClient.Create starting
	I0819 10:29:02.664932    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:29:02.664992    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665005    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665051    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:29:02.665087    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665103    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665116    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:29:02.665122    4789 main.go:141] libmachine: (ha-431000-m03) Calling .PreCreateCheck
	I0819 10:29:02.665218    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.665228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:02.674109    4789 main.go:141] libmachine: Creating machine...
	I0819 10:29:02.674126    4789 main.go:141] libmachine: (ha-431000-m03) Calling .Create
	I0819 10:29:02.674302    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.674550    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.674293    4918 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:29:02.674675    4789 main.go:141] libmachine: (ha-431000-m03) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:29:02.956098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.955977    4918 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa...
	I0819 10:29:03.041212    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.041121    4918 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk...
	I0819 10:29:03.041230    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing magic tar header
	I0819 10:29:03.041239    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing SSH key tar header
	I0819 10:29:03.042098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.042003    4918 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03 ...
	I0819 10:29:03.582755    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.582783    4789 main.go:141] libmachine: (ha-431000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:29:03.582846    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:29:03.618942    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:29:03.618960    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:29:03.619021    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619049    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619085    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:29:03.619116    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:29:03.619133    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:29:03.621990    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Pid is 4921
	I0819 10:29:03.622461    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:29:03.622497    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.622585    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:03.623424    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:03.623486    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:03.623500    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:03.623537    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:03.623548    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:03.623558    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:03.623568    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:03.629643    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:29:03.638725    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:29:03.639577    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:03.639599    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:03.639609    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:03.639622    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.022361    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:29:04.022375    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:29:04.137228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:04.137262    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:04.137274    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:04.137284    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.138001    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:29:04.138016    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:29:05.623879    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 1
	I0819 10:29:05.623896    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:05.624023    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:05.624809    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:05.624873    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:05.624888    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:05.624904    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:05.624917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:05.624926    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:05.624935    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:07.626679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 2
	I0819 10:29:07.626696    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:07.626779    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:07.627539    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:07.627582    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:07.627592    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:07.627610    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:07.627619    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:07.627626    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:07.627635    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.627812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 3
	I0819 10:29:09.627828    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:09.627917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:09.628679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:09.628746    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:09.628777    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:09.628791    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:09.628799    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:09.628806    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:09.628812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.722721    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:29:09.722792    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:29:09.722802    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:29:09.745848    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:29:11.630390    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 4
	I0819 10:29:11.630407    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:11.630495    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:11.631275    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:11.631321    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:11.631331    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:11.631340    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:11.631359    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:11.631366    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:11.631387    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:13.633236    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 5
	I0819 10:29:13.633251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.633339    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.634147    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:13.634209    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0819 10:29:13.634221    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:29:13.634228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:29:13.634232    4789 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
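
The "Attempt 0" through "Attempt 5" blocks above are the driver rescanning /var/db/dhcpd_leases roughly every two seconds until a lease for the new VM's MAC (f6:29:ff:43:e4:63) appears. A hedged Go sketch of such a scan; it assumes the multi-line key=value block format macOS's bootpd writes (the field names ip_address/hw_address are an assumption, since the log only shows the parsed result):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIPByMAC scans a bootpd-style lease file for a block whose
	// hw_address contains mac and returns that block's ip_address.
	func findIPByMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		matched := false
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
				matched = true
			case line == "}": // end of one lease block
				if matched {
					return ip, nil
				}
				ip, matched = "", false
			}
		}
		return "", fmt.Errorf("%s not found in %s", mac, path)
	}

	func main() {
		ip, err := findIPByMAC("/var/db/dhcpd_leases", "f6:29:ff:43:e4:63")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("IP:", ip) // the log's sixth attempt resolves to 192.169.0.7
	}
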
	I0819 10:29:13.634299    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:13.634943    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635064    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635157    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:29:13.635165    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:29:13.635251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.635310    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.636120    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:29:13.636129    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:29:13.636133    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:29:13.636138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:13.636228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:13.636309    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636392    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636477    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:13.636587    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:13.636755    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:13.636763    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:29:14.697546    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
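
The `exit 0` command above is a liveness probe: provisioning only proceeds once SSH accepts a session and the no-op command succeeds. A self-contained sketch of that probe with golang.org/x/crypto/ssh; the address and key path are taken from the log, while the 2s retry cadence is a guess:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// waitForSSH dials addr and runs `exit 0` until it succeeds or timeout.
	func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
			Timeout:         5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
				sess, err := client.NewSession()
				if err == nil {
					runErr := sess.Run("exit 0")
					sess.Close()
					client.Close()
					if runErr == nil {
						return nil
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh not ready on %s after %s", addr, timeout)
	}

	func main() {
		err := waitForSSH("192.169.0.7:22", "docker",
			"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa",
			2*time.Minute)
		fmt.Println(err)
	}
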
	I0819 10:29:14.697558    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:29:14.697564    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.697702    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.697798    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.697887    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.698009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.698168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.698318    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.698326    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:29:14.765778    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:29:14.765827    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:29:14.765833    4789 main.go:141] libmachine: Provisioning with buildroot...
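
Provisioner detection here boils down to parsing the key=value pairs `cat /etc/os-release` returned a few lines up. A small sketch (not minikube's code) fed with the exact output captured above:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns os-release output into a map, stripping optional
	// quotes; the ID field is enough to pick the "buildroot" provisioner.
	func parseOSRelease(raw string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(raw))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			k, v, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			out[k] = strings.Trim(v, `"`)
		}
		return out
	}

	func main() {
		raw := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		info := parseOSRelease(raw)
		fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
	}
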
	I0819 10:29:14.765839    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.765977    4789 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:29:14.765988    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.766081    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.766185    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.766270    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766369    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766481    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.766635    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.766783    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.766792    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:29:14.841753    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:29:14.841769    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.841901    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.842009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842101    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842195    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.842324    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.842477    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.842489    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:29:14.911764    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:29:14.911779    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:29:14.911793    4789 buildroot.go:174] setting up certificates
	I0819 10:29:14.911800    4789 provision.go:84] configureAuth start
	I0819 10:29:14.911807    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.911942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:14.912037    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.912110    4789 provision.go:143] copyHostCerts
	I0819 10:29:14.912141    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912187    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:29:14.912193    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912326    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:29:14.912504    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912534    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:29:14.912539    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912651    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:29:14.912808    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912854    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:29:14.912859    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912935    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:29:14.913083    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
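
provision.go:117 issues a server certificate signed by the cluster's local CA, with the SAN list shown above (two IPs plus three hostnames). A hedged sketch with crypto/x509: the self-signed CA built in main stands in for ca.pem/ca-key.pem, the 26280h lifetime mirrors CertExpiration from the config dump, and error handling is pared down for brevity.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a CA-signed server certificate whose SANs are
	// split into IPAddresses and DNSNames, as TLS verification requires.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, s)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
		pemBytes, err := newServerCert(caCert, caKey, "jenkins.ha-431000-m03",
			[]string{"127.0.0.1", "192.169.0.7", "ha-431000-m03", "localhost", "minikube"})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", pemBytes)
	}
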
	I0819 10:29:15.064390    4789 provision.go:177] copyRemoteCerts
	I0819 10:29:15.064440    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:29:15.064455    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.064599    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.064695    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.064786    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.064886    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:15.103656    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:29:15.103727    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:29:15.123430    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:29:15.123497    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:29:15.143265    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:29:15.143333    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:29:15.162885    4789 provision.go:87] duration metric: took 251.064942ms to configureAuth
	I0819 10:29:15.162900    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:29:15.163052    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:15.163065    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:15.163221    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.163329    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.163417    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163506    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.163693    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.163824    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.163831    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:29:15.225270    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:29:15.225286    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:29:15.225356    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:29:15.225368    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.225510    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.225619    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225708    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225810    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.225948    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.226090    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.226134    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:29:15.299640    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
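The unit file echoed back above was rendered host-side and piped through sudo tee. A pared-down Go sketch of that rendering with text/template; only the per-node fields (the cumulative NO_PROXY values and the provider label) are parameterized, and most directives of the real unit are omitted:

	package main

	import (
		"os"
		"text/template"
	)

	// A skeleton of the drop-in above; the real file carries many more
	// directives (ulimits, Delegate, KillMode, and so on).
	const unitTmpl = `[Unit]
	Description=Docker Application Container Engine
	After=network.target minikube-automount.service docker.socket
	Requires=minikube-automount.service docker.socket

	[Service]
	Type=notify
	Restart=on-failure
	{{range .NoProxy}}Environment="NO_PROXY={{.}}"
	{{end}}ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}}

	[Install]
	WantedBy=multi-user.target
	`

	func main() {
		t := template.Must(template.New("docker.service").Parse(unitTmpl))
		// The log shows cumulative NO_PROXY entries as control-plane nodes join.
		data := struct {
			NoProxy  []string
			Provider string
		}{
			NoProxy:  []string{"192.169.0.5", "192.169.0.5,192.169.0.6"},
			Provider: "hyperkit",
		}
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}

Note the write-then-swap step a few lines below: the unit lands in docker.service.new first, and only when `diff -u` reports a change is it moved into place and the daemon reloaded, enabled, and restarted, so an unchanged unit costs no restart.
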
	I0819 10:29:15.299658    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.299797    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.299889    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.299978    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.300067    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.300202    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.300355    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.300368    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:29:16.819930    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:29:16.819945    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:29:16.819953    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetURL
	I0819 10:29:16.820095    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:29:16.820107    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:29:16.820113    4789 client.go:171] duration metric: took 14.154897138s to LocalClient.Create
	I0819 10:29:16.820124    4789 start.go:167] duration metric: took 14.154947877s to libmachine.API.Create "ha-431000"
	I0819 10:29:16.820129    4789 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:29:16.820136    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:29:16.820145    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.820288    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:29:16.820301    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.820396    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.820494    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.820582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.820664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:16.862693    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:29:16.866416    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:29:16.866431    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:29:16.866540    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:29:16.866725    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:29:16.866732    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:29:16.866944    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:29:16.874578    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:29:16.904910    4789 start.go:296] duration metric: took 84.771069ms for postStartSetup
	I0819 10:29:16.904942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:16.905569    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.905740    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:16.906122    4789 start.go:128] duration metric: took 14.273822612s to createHost
	I0819 10:29:16.906138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.906230    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.906303    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906387    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906475    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.906573    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:16.906690    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:16.906697    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:29:16.969389    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088556.958185685
	
	I0819 10:29:16.969401    4789 fix.go:216] guest clock: 1724088556.958185685
	I0819 10:29:16.969406    4789 fix.go:229] Guest: 2024-08-19 10:29:16.958185685 -0700 PDT Remote: 2024-08-19 10:29:16.906131 -0700 PDT m=+127.499217490 (delta=52.054685ms)
	I0819 10:29:16.969416    4789 fix.go:200] guest clock delta is within tolerance: 52.054685ms
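
fix.go above compares the guest's `date +%s.%N` output against the host clock and accepts the skew when it falls within tolerance. A sketch of that arithmetic using the exact values from the log (the 1s threshold here is illustrative; the log does not state the actual tolerance):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta converts `date +%s.%N` output into a time.Time and returns
	// how far the guest clock sits ahead of (or behind) the host clock.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(0, 1724088556906131000) // the "Remote: ...16.906131" timestamp
		d, err := clockDelta("1724088556.958185685", host)
		if err != nil {
			panic(err)
		}
		fmt.Println("delta:", d, "within tolerance:", d.Abs() < time.Second) // ~52ms, as logged
	}
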
	I0819 10:29:16.969419    4789 start.go:83] releasing machines lock for "ha-431000-m03", held for 14.337247496s
	I0819 10:29:16.969437    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.969573    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.992258    4789 out.go:177] * Found network options:
	I0819 10:29:17.014265    4789 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:29:17.037508    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.037542    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.037561    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038432    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038682    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038835    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:29:17.038873    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:29:17.038922    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.038957    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.039067    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:29:17.039087    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:17.039116    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039298    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039332    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039497    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039590    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039679    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039721    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:17.039809    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:29:17.074320    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:29:17.074385    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:29:17.120302    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:29:17.120318    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.120398    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.135851    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:29:17.144402    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:29:17.152735    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.152784    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:29:17.161185    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.169599    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:29:17.177908    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.186319    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:29:17.194967    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:29:17.203702    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:29:17.212228    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:29:17.220632    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:29:17.228164    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:29:17.235717    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.329551    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:29:17.348829    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.348909    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:29:17.363903    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.374976    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:29:17.393061    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.404238    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.414728    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:29:17.438632    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.449143    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.464536    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:29:17.467445    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:29:17.474809    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:29:17.488421    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:29:17.581504    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:29:17.684960    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.684980    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
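
docker.go:574 ships a small /etc/docker/daemon.json selecting the cgroupfs driver. The 130-byte payload itself is not shown in the log, so this is only a sketch of a minimal file built around Docker's documented exec-opts key:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// daemonConfig marshals a minimal daemon.json choosing the cgroup driver
	// via exec-opts; minikube's actual file may carry additional settings.
	func daemonConfig(driver string) ([]byte, error) {
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=" + driver},
		}
		return json.MarshalIndent(cfg, "", "  ")
	}

	func main() {
		b, err := daemonConfig("cgroupfs")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b))
	}
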
	I0819 10:29:17.699658    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.803979    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:30:18.773891    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.968555005s)
	I0819 10:30:18.774012    4789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:30:18.808676    4789 out.go:201] 
	W0819 10:30:18.829152    4789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:29:15 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570013158Z" level=info msg="Starting up"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570447745Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.572542412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.584880924Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603137975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603181724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603219390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603233227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603303033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603338653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603471354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603509282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603521199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603528665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603591360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603811486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605351283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605389063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605504861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605538594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605610859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605677674Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607907354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607976584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607991948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608010711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608023403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608093276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608724366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608874333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608913351Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608929178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608943960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608968346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609006571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609021660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609032833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609044499Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609055485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609066063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609088279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609103865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609115537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609130257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609139734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609151164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609161605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609173829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609200246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609211000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609224200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609237871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609251525Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609296616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609316285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609327369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609362155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609478815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609512436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609530768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609541857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609553085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609563545Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610591556Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610680787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610769049Z" level=info msg="containerd successfully booted in 0.026402s"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.601341697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.606766805Z" level=info msg="Loading containers: start."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.688780306Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.769433920Z" level=info msg="Loading containers: done."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776749571Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776865122Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.804822251Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.805010917Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:29:16 ha-431000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.814047535Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:29:17 ha-431000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815466623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815881336Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815956644Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.816022765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:18 ha-431000-m03 dockerd[921]: time="2024-08-19T17:29:18.853267859Z" level=info msg="Starting up"
	Aug 19 17:30:18 ha-431000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0819 10:30:18.829235    4789 out.go:270] * 
	W0819 10:30:18.830413    4789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:30:18.888275    4789 out.go:201] 
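The fatal line in the trace above is dockerd timing out while dialing /run/containerd/containerd.sock ("context deadline exceeded") after the restart at 17:29:18. A minimal, hypothetical probe for that condition, independent of minikube's own code, is just a unix-socket dial with a deadline:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same unix socket dockerd waits on; a healthy containerd
	// accepts the connection almost immediately.
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "containerd socket not ready:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}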
	
	
	==> Docker <==
	Aug 19 17:28:07 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3745c7f8fb9ffda1a9528dbab0743afd132acd46a2634643d4b5a24035dc2e4/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/868ee98671e833d733f787480bd37f293c8c6eb8b4092a75c7b96c7993f5f451/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/74fd2f09b011aa0f318ae4259efd3f3d52dc61d0bd78f032481d1a46763eeaae/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.132794637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133043856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133186443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133435141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139175494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139344496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139355701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139421519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157876304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157962624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157975535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.158198941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621287999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621447365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621465217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621560978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6d38fc70c811c9647892071fd07ef2e6455806b20e204cd6583df80c81ba64b7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 19 17:30:23 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040175789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040258993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040272849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040810082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da6e4a61b6cf8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   6d38fc70c811c       busybox-7dff88458-x7m6m
	b9d1bccf00c94       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	e7cacf032435f       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   868ee98671e83       storage-provisioner
	a3891ab602da5       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              15 minutes ago      Running             kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                         15 minutes ago      Running             kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	ed733554ed160       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   90ec229d87c2c       kube-vip-ha-431000
	11d9cd3b2f49f       1766f54c897f0                                                                                         15 minutes ago      Running             kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	262471364c991       604f5db92eaa8                                                                                         15 minutes ago      Running             kube-apiserver            0                   5a0fe916eaf1d       kube-apiserver-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                         15 minutes ago      Running             etcd                      0                   fc30d54d1b565       etcd-ha-431000
	2801f8f44773b       045733566833c                                                                                         15 minutes ago      Running             kube-controller-manager   0                   80d21805f230b       kube-controller-manager-ha-431000
	
	
	==> coredns [a3891ab602da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40841 - 35632 "HINFO IN 8043641794425982319.4992720317295253252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008506209s
	[INFO] 10.244.1.2:51889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132717s
	[INFO] 10.244.1.2:37985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001601417s
	[INFO] 10.244.1.2:55682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.007910651s
	[INFO] 10.244.0.4:38616 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.000569215s
	[INFO] 10.244.0.4:47772 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000054313s
	[INFO] 10.244.1.2:49768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135774s
	[INFO] 10.244.1.2:55729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00095124s
	[INFO] 10.244.1.2:38602 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089444s
	[INFO] 10.244.1.2:52875 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099022s
	[INFO] 10.244.1.2:49308 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063848s
	[INFO] 10.244.0.4:57863 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000064923s
	[INFO] 10.244.0.4:40409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096347s
	[INFO] 10.244.1.2:34617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084305s
	[INFO] 10.244.1.2:55843 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058734s
	[INFO] 10.244.0.4:43213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096675s
	[INFO] 10.244.0.4:44050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031036s
	[INFO] 10.244.1.2:49077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105574s
	[INFO] 10.244.1.2:57560 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084227s
	[INFO] 10.244.1.2:40959 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135434s
	
	
	==> coredns [b9d1bccf00c9] <==
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54195 - 29045 "HINFO IN 6513715404119561949.1799819676960271336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007921235s
	[INFO] 10.244.1.2:45210 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.055498798s
	[INFO] 10.244.0.4:53730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111076s
	[INFO] 10.244.0.4:51704 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000411643s
	[INFO] 10.244.1.2:54559 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088744s
	[INFO] 10.244.1.2:58642 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064137s
	[INFO] 10.244.1.2:34281 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000845538s
	[INFO] 10.244.0.4:53439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000058375s
	[INFO] 10.244.0.4:33951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106207s
	[INFO] 10.244.0.4:38202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000034691s
	[INFO] 10.244.0.4:46478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119286s
	[INFO] 10.244.0.4:53704 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053613s
	[INFO] 10.244.0.4:42766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051163s
	[INFO] 10.244.1.2:44413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116167s
	[INFO] 10.244.1.2:58453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067066s
	[INFO] 10.244.0.4:37472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063597s
	[INFO] 10.244.0.4:59559 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033396s
	[INFO] 10.244.1.2:59906 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000120736s
	[INFO] 10.244.0.4:47175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120659s
	[INFO] 10.244.0.4:56722 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121072s
	[INFO] 10.244.0.4:43652 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174608s
	[INFO] 10.244.0.4:32818 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00017028s
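The coredns entries above are ordinary in-cluster lookups (A/AAAA/PTR). For reference, a query like the "A IN kubernetes.default.svc.cluster.local" lines can be reproduced from any pod with a plain resolver call; this sketch assumes it runs inside the cluster, where kubelet points /etc/resolv.conf at the cluster DNS service (10.96.0.10 in this report):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolves via the pod's /etc/resolv.conf, i.e. through CoreDNS.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default resolves to:", addrs)
}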
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:28:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7b5b85e2c64405f969f3e24eb671b2e
	  System UUID:                7f844fbb-0000-0000-b5d6-699bdfe1640c
	  Boot ID:                    cb211998-dc9c-4fd5-a169-3f6eeb2403fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x7m6m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-hr2qx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-vc76p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-lvdbg                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-5l56s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeReady                14m                kubelet          Node ha-431000 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	
	
	Name:               ha-431000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-431000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 21fb6f298fbf435c88fd6e9f9b50e04f
	  System UUID:                decf4e23-0000-0000-95db-084dbcc69753
	  Boot ID:                    330a7904-5229-4d07-9792-de118102386c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2l9lq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-431000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-qmgqd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-431000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-431000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5h7j2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-431000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-431000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	
	
	Name:               ha-431000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_42_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:42:52 +0000   Mon, 19 Aug 2024 17:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:42:52 +0000   Mon, 19 Aug 2024 17:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:42:52 +0000   Mon, 19 Aug 2024 17:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:42:52 +0000   Mon, 19 Aug 2024 17:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-431000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e639484a1c98402fa6d9e2bb5fe71e03
	  System UUID:                c32a4140-0000-0000-838a-ef53ae6c724a
	  Boot ID:                    65e77bd5-3b1f-49d0-a224-e0cd2d7b346a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wfcpq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-kcrzx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29s
	  kube-system                 kube-proxy-2fn5w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  29s (x2 over 29s)  kubelet          Node ha-431000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x2 over 29s)  kubelet          Node ha-431000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x2 over 29s)  kubelet          Node ha-431000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  RegisteredNode           25s                node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeReady                6s                 kubelet          Node ha-431000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.712596] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230971] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.519395] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.106046] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.754357] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.260100] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.108326] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.116397] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.050322] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.370658] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.100232] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.114416] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.133019] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.706453] systemd-fstab-generator[1261]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.542020] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +4.524199] systemd-fstab-generator[1651]: Ignoring "noauto" option for root device
	[  +0.058523] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.145787] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
	[  +0.090131] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.001426] kauditd_printk_skb: 35 callbacks suppressed
	[Aug19 17:28] kauditd_printk_skb: 15 callbacks suppressed
	[ +36.695422] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [39fe08877284] <==
	{"level":"info","ts":"2024-08-19T17:28:39.577230Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577486Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577607Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577632Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-19T17:28:39.577678Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577764Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577976Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.578023Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582369Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T17:28:40.582407Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582418Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.596476Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.597370Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T17:28:40.597585Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.605913Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:41.107824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860 13991592590719088728)"}
	{"level":"info","ts":"2024-08-19T17:28:41.107895Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-19T17:28:41.107911Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:32:31.484329Z","caller":"traceutil/trace.go:171","msg":"trace[1768622606] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"105.97642ms","start":"2024-08-19T17:32:31.378330Z","end":"2024-08-19T17:32:31.484306Z","steps":["trace[1768622606] 'process raft request'  (duration: 69.010204ms)","trace[1768622606] 'compare'  (duration: 36.887791ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:37:40.726136Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1233}
	{"level":"info","ts":"2024-08-19T17:37:40.747676Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1233,"took":"20.998439ms","hash":1199177849,"current-db-size-bytes":3051520,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-19T17:37:40.747929Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1199177849,"revision":1233,"compact-revision":-1}
	{"level":"info","ts":"2024-08-19T17:42:40.732325Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1862}
	{"level":"info","ts":"2024-08-19T17:42:40.746963Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1862,"took":"13.211731ms","hash":857120674,"current-db-size-bytes":3051520,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1675264,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-19T17:42:40.747021Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":857120674,"revision":1862,"compact-revision":1233}
	
	
	==> kernel <==
	 17:42:58 up 15 min,  0 users,  load average: 0.05, 0.12, 0.09
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:42:13.913166       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:13.913176       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:23.920285       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:42:23.920446       1 main.go:299] handling current node
	I0819 17:42:23.920502       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:23.920520       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:33.912776       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:33.912941       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:33.913148       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:42:33.913243       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:42:33.913373       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.8 Flags: [] Table: 0} 
	I0819 17:42:33.913565       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:42:33.913609       1 main.go:299] handling current node
	I0819 17:42:43.915583       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:42:43.915670       1 main.go:299] handling current node
	I0819 17:42:43.915684       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:43.915691       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:43.915840       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:42:43.915938       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:42:53.914225       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:42:53.914609       1 main.go:299] handling current node
	I0819 17:42:53.914774       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:53.914814       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:53.914944       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:42:53.915297       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [262471364c99] <==
	I0819 17:27:42.843862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 17:27:42.851035       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 17:27:42.851176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:27:43.131229       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 17:27:43.156609       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 17:27:43.228677       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 17:27:43.232630       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0819 17:27:43.233263       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:27:43.235521       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 17:27:43.816793       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 17:27:45.642805       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 17:27:45.648554       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 17:27:45.656204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 17:27:49.372173       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 17:27:49.521616       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 17:41:58.471372       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51273: use of closed network connection
	E0819 17:41:58.792809       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51278: use of closed network connection
	E0819 17:41:58.976708       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51280: use of closed network connection
	E0819 17:41:59.288867       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51285: use of closed network connection
	E0819 17:41:59.474614       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51287: use of closed network connection
	E0819 17:41:59.785950       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51292: use of closed network connection
	E0819 17:42:02.821757       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51320: use of closed network connection
	E0819 17:42:03.005704       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51322: use of closed network connection
	E0819 17:42:03.316458       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51327: use of closed network connection
	E0819 17:42:03.527436       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51329: use of closed network connection
	
	
	==> kube-controller-manager [2801f8f44773] <==
	I0819 17:40:53.216070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:41:01.735584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	E0819 17:42:29.279378       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-nzg89 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-nzg89\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0819 17:42:29.577488       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-431000-m04\" does not exist"
	I0819 17:42:29.587389       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-431000-m04" podCIDRs=["10.244.2.0/24"]
	I0819 17:42:29.587695       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:29.587776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:29.597406       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:29.968943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:30.304809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:30.365012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.195µs"
	I0819 17:42:32.043252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:33.778806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:33.779606       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-431000-m04"
	I0819 17:42:33.857848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:39.645314       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:52.547283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:52.548660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-431000-m04"
	I0819 17:42:52.555756       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:52.559687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.641µs"
	I0819 17:42:52.568999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.897µs"
	I0819 17:42:52.574921       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.923µs"
	I0819 17:42:53.790919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:54.429233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.87659ms"
	I0819 17:42:54.429711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.036µs"
	
	
	==> kube-proxy [889ab608901b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:27:50.162614       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:27:50.171417       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0819 17:27:50.171450       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:27:50.239161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:27:50.239202       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:27:50.239220       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:27:50.242102       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:27:50.242306       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:27:50.242335       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:27:50.253458       1 config.go:197] "Starting service config controller"
	I0819 17:27:50.253497       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:27:50.253518       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:27:50.253542       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:27:50.253889       1 config.go:326] "Starting node config controller"
	I0819 17:27:50.253915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:27:50.354735       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:27:50.354788       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:27:50.354817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	W0819 17:27:42.867998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:27:42.868077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.900445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:27:42.900541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.970545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:27:42.970765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:43.004003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:43.004103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:27:43.339820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:30:22.272037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:30:22.273195       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e37fe27d-f1bf-427d-a76d-96722b0c74a1(default/busybox-7dff88458-x7m6m) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x7m6m"
	E0819 17:30:22.273433       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" pod="default/busybox-7dff88458-x7m6m"
	I0819 17:30:22.273582       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:42:29.626807       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kcrzx\": pod kindnet-kcrzx is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kcrzx" node="ha-431000-m04"
	E0819 17:42:29.626857       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4d8e74ea-456c-476b-951f-c880eb642788(kube-system/kindnet-kcrzx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kcrzx"
	E0819 17:42:29.626868       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kcrzx\": pod kindnet-kcrzx is already assigned to node \"ha-431000-m04\"" pod="kube-system/kindnet-kcrzx"
	I0819 17:42:29.626879       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kcrzx" node="ha-431000-m04"
	E0819 17:42:29.628487       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2fn5w\": pod kube-proxy-2fn5w is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2fn5w" node="ha-431000-m04"
	E0819 17:42:29.628792       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bca1b722-fe85-4f4b-a536-8228357812a4(kube-system/kube-proxy-2fn5w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2fn5w"
	E0819 17:42:29.628962       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2fn5w\": pod kube-proxy-2fn5w is already assigned to node \"ha-431000-m04\"" pod="kube-system/kube-proxy-2fn5w"
	I0819 17:42:29.629175       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2fn5w" node="ha-431000-m04"
	E0819 17:42:52.562727       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wfcpq\": pod busybox-7dff88458-wfcpq is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wfcpq" node="ha-431000-m04"
	E0819 17:42:52.562826       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c7d1dd4a-aba7-4c8f-be2e-0dc5cdb4faf7(default/busybox-7dff88458-wfcpq) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wfcpq"
	E0819 17:42:52.562855       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wfcpq\": pod busybox-7dff88458-wfcpq is already assigned to node \"ha-431000-m04\"" pod="default/busybox-7dff88458-wfcpq"
	I0819 17:42:52.562878       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wfcpq" node="ha-431000-m04"
	
	
	==> kubelet <==
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:38:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:39:45 ha-431000 kubelet[2153]: E0819 17:39:45.526214    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:39:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:40:45 ha-431000 kubelet[2153]: E0819 17:40:45.529172    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:40:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:41:45 ha-431000 kubelet[2153]: E0819 17:41:45.526920    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:41:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:41:59 ha-431000 kubelet[2153]: E0819 17:41:59.290192    2153 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49834->127.0.0.1:35619: write tcp 127.0.0.1:49834->127.0.0.1:35619: write: broken pipe
	Aug 19 17:42:45 ha-431000 kubelet[2153]: E0819 17:42:45.526621    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:42:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:42:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:42:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:42:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (53.91s)
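
Note on the CopyFile failure recorded next: it never reaches the copy step. Its first assertion shells out to `minikube status`, and the nonzero exit status (2 in the run below) reflects the stopped kubelet and apiserver on ha-431000-m03 visible in the JSON that follows. A minimal Go sketch of observing that exit code the way a harness would (binary and profile names are copied from the logs; this is not minikube's actual test code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the test makes below.
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "ha-431000", "status", "--output", "json")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		// A nonzero exit marks the cluster unhealthy; the test fails on it.
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}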

                                                
                                    
TestMultiControlPlane/serial/CopyFile (3.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status --output json -v=7 --alsologtostderr: exit status 2 (423.937985ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-431000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-431000-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-431000-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-431000-m04","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
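
The status JSON above is the artifact the test asserts on; for readers scripting against it, a minimal Go sketch of decoding it and flagging the unhealthy control-plane node (struct fields are inferred from this output alone, not from minikube's source):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// nodeStatus captures only the fields visible in the JSON above.
	type nodeStatus struct {
		Name      string
		Host      string
		Kubelet   string
		APIServer string
		Worker    bool
	}

	func main() {
		// Shortened copy of the output above; in practice read it from
		// `minikube status --output json`.
		raw := `[{"Name":"ha-431000-m03","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}]`
		var nodes []nodeStatus
		if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
			panic(err)
		}
		for _, n := range nodes {
			if !n.Worker && n.APIServer != "Running" {
				fmt.Printf("control-plane node %s unhealthy: kubelet=%s apiserver=%s\n", n.Name, n.Kubelet, n.APIServer)
			}
		}
	}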
** stderr ** 
	I0819 10:43:00.529041    6263 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:43:00.529336    6263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:43:00.529341    6263 out.go:358] Setting ErrFile to fd 2...
	I0819 10:43:00.529345    6263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:43:00.529516    6263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:43:00.529694    6263 out.go:352] Setting JSON to true
	I0819 10:43:00.529718    6263 mustload.go:65] Loading cluster: ha-431000
	I0819 10:43:00.529753    6263 notify.go:220] Checking for updates...
	I0819 10:43:00.530050    6263 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:43:00.530068    6263 status.go:255] checking status of ha-431000 ...
	I0819 10:43:00.530432    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.530482    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.539458    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51476
	I0819 10:43:00.539915    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.540358    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.540392    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.540628    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.540735    6263 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:43:00.540828    6263 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:43:00.540892    6263 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:43:00.541917    6263 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:43:00.541935    6263 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:43:00.542169    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.542190    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.550512    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51478
	I0819 10:43:00.550827    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.551187    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.551208    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.551448    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.551563    6263 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:43:00.551650    6263 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:43:00.551904    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.551928    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.564183    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51480
	I0819 10:43:00.564534    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.564832    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.564840    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.565035    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.565138    6263 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:43:00.565268    6263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:43:00.565287    6263 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:43:00.565368    6263 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:43:00.565452    6263 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:43:00.565533    6263 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:43:00.565618    6263 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:43:00.595172    6263 ssh_runner.go:195] Run: systemctl --version
	I0819 10:43:00.599496    6263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:43:00.611757    6263 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:43:00.611779    6263 api_server.go:166] Checking apiserver status ...
	I0819 10:43:00.611834    6263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:43:00.623455    6263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W0819 10:43:00.631625    6263 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:43:00.631674    6263 ssh_runner.go:195] Run: ls
	I0819 10:43:00.634754    6263 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:43:00.637822    6263 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:43:00.637834    6263 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:43:00.637843    6263 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:43:00.637853    6263 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:43:00.638095    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.638115    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.646791    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51484
	I0819 10:43:00.647146    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.647451    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.647464    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.647693    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.647820    6263 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:43:00.647902    6263 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:43:00.648007    6263 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:43:00.649030    6263 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:43:00.649042    6263 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:43:00.649319    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.649343    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.657970    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51486
	I0819 10:43:00.658291    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.658603    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.658624    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.658922    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.659060    6263 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:43:00.659158    6263 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:43:00.659427    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.659450    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.668037    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51488
	I0819 10:43:00.668488    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.668817    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.668833    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.669052    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.669169    6263 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:43:00.669299    6263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:43:00.669310    6263 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:43:00.669390    6263 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:43:00.669479    6263 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:43:00.669560    6263 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:43:00.669628    6263 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:43:00.702454    6263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:43:00.713647    6263 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:43:00.713661    6263 api_server.go:166] Checking apiserver status ...
	I0819 10:43:00.713699    6263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:43:00.724396    6263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup
	W0819 10:43:00.731838    6263 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1869/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:43:00.731885    6263 ssh_runner.go:195] Run: ls
	I0819 10:43:00.735434    6263 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:43:00.738704    6263 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:43:00.738720    6263 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:43:00.738727    6263 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:43:00.738744    6263 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:43:00.739007    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.739028    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.747556    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51492
	I0819 10:43:00.747909    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.748249    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.748265    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.748474    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.748586    6263 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:43:00.748672    6263 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:43:00.748740    6263 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:43:00.749742    6263 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:43:00.749754    6263 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:43:00.749992    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.750029    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.758805    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51494
	I0819 10:43:00.759146    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.759474    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.759483    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.759688    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.759783    6263 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:43:00.759870    6263 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:43:00.760110    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.760131    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.768635    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51496
	I0819 10:43:00.768979    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.769298    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.769308    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.769544    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.769672    6263 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:43:00.769806    6263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:43:00.769818    6263 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:43:00.769889    6263 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:43:00.769964    6263 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:43:00.770058    6263 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:43:00.770145    6263 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:43:00.804707    6263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:43:00.815440    6263 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:43:00.815455    6263 api_server.go:166] Checking apiserver status ...
	I0819 10:43:00.815495    6263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:43:00.825104    6263 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:43:00.825115    6263 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:43:00.825124    6263 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:43:00.825139    6263 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:43:00.825392    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.825416    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.834122    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51499
	I0819 10:43:00.834458    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.834811    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.834826    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.835020    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.835134    6263 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:43:00.835216    6263 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:43:00.835282    6263 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:43:00.836312    6263 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:43:00.836322    6263 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:43:00.836573    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.836598    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.845331    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51501
	I0819 10:43:00.845670    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.845998    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.846009    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.846237    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.846348    6263 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:43:00.846438    6263 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:43:00.846686    6263 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:00.846707    6263 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:00.855282    6263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51503
	I0819 10:43:00.855627    6263 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:00.855979    6263 main.go:141] libmachine: Using API Version  1
	I0819 10:43:00.855996    6263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:00.856213    6263 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:00.856332    6263 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:43:00.856474    6263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:43:00.856486    6263 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:43:00.856560    6263 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:43:00.856658    6263 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:43:00.856748    6263 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:43:00.856828    6263 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:43:00.885325    6263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:43:00.896329    6263 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
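
The stderr above shows the per-node probe sequence: minikube SSHes into each node, runs `pgrep` for a kube-apiserver process (which fails on ha-431000-m03, yielding APIServer=Stopped), and issues a GET against /healthz on the shared endpoint. A minimal Go sketch of the healthz half (endpoint copied from the logs above; the real check trusts the cluster CA rather than skipping TLS verification, which is assumed here only to keep the sketch short):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz mirrors the "Checking apiserver healthz at ..." step above.
	func checkHealthz(endpoint string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch-only assumption: skip certificate validation.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.169.0.254:8443"); err != nil {
			fmt.Println("apiserver not healthy:", err)
			return
		}
		fmt.Println("ok")
	}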
ha_test.go:328: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-431000 status --output json -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.279023333s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT |                     |
	|         | busybox-7dff88458-wfcpq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-431000 -v=7                | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:27:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:27:09.441458    4789 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:27:09.441716    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441721    4789 out.go:358] Setting ErrFile to fd 2...
	I0819 10:27:09.441725    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441914    4789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:27:09.443405    4789 out.go:352] Setting JSON to false
	I0819 10:27:09.468451    4789 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3399,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:27:09.468547    4789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:27:09.554597    4789 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:27:09.577770    4789 notify.go:220] Checking for updates...
	I0819 10:27:09.609734    4789 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:27:09.676944    4789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:09.699980    4789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:27:09.722951    4789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:27:09.744804    4789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.765726    4789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:27:09.787204    4789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:27:09.817679    4789 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 10:27:09.859821    4789 start.go:297] selected driver: hyperkit
	I0819 10:27:09.859849    4789 start.go:901] validating driver "hyperkit" against <nil>
	I0819 10:27:09.859893    4789 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:27:09.864287    4789 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.864395    4789 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:27:09.872759    4789 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:27:09.876743    4789 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.876768    4789 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:27:09.876803    4789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:27:09.877011    4789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:27:09.877072    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:09.877082    4789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 10:27:09.877094    4789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:27:09.877164    4789 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:09.877251    4789 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.919755    4789 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:27:09.940604    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:09.940675    4789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:27:09.940720    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:09.940918    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:09.940931    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:09.941271    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:09.941299    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json: {Name:mkf9dcbb24d8b9fbe62d81f81a7a87fec457d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:09.941835    4789 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:09.941963    4789 start.go:364] duration metric: took 95.166µs to acquireMachinesLock for "ha-431000"
	I0819 10:27:09.941997    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:09.942082    4789 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 10:27:09.963791    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:09.964075    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.964148    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:09.974068    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51111
	I0819 10:27:09.974512    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:09.974919    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:09.974932    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:09.975172    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:09.975283    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:09.975374    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:09.975471    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:09.975492    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:09.975527    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:09.975578    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975594    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975657    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:09.975695    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975707    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975719    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:09.975729    4789 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0819 10:27:09.975800    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.975970    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:09.976388    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:09.976397    4789 main.go:141] libmachine: (ha-431000) Calling .Create
	I0819 10:27:09.976462    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.976580    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:09.976459    4799 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.976633    4789 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:10.160305    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.160220    4799 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa...
	I0819 10:27:10.258779    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.258678    4799 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0819 10:27:10.258792    4789 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0819 10:27:10.258800    4789 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0819 10:27:10.259681    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.259588    4799 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000 ...
	I0819 10:27:10.634434    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.634476    4789 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:27:10.634529    4789 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:27:10.744945    4789 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:27:10.744966    4789 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:10.744993    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745030    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745065    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:10.745094    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:10.745118    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:10.748020    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Pid is 4802
	I0819 10:27:10.748404    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:27:10.748413    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.748494    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:10.749357    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:10.749398    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:10.749412    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:10.749423    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:10.749431    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:10.755634    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:10.806699    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:10.807300    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:10.807314    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:10.807322    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:10.807335    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.184562    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:11.184575    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:11.299194    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:11.299213    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:11.299228    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:11.299236    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.300075    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:11.300086    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:12.750038    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0819 10:27:12.750054    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:12.750189    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:12.750969    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:12.751019    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:12.751030    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:12.751039    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:12.751052    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:14.752158    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0819 10:27:14.752174    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:14.752264    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:14.753040    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:14.753090    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:14.753102    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:14.753111    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:14.753117    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.754325    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0819 10:27:16.754340    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:16.754402    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:16.755326    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:16.755347    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:16.755354    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:16.755373    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:16.755390    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.856153    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:27:16.856252    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:27:16.856262    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:27:16.880804    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:27:18.757489    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0819 10:27:18.757504    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:18.757601    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:18.758394    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:18.758435    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:18.758449    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:18.758481    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:18.758495    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:20.758927    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0819 10:27:20.758946    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.759035    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.759848    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:20.759873    4789 main.go:141] libmachine: (ha-431000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:20.759888    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:20.759901    4789 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:27:20.759913    4789 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
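
Attempts 0 through 4 above come up empty because macOS's bootpd has not leased an address to the freshly generated MAC yet; on attempt 5 the entry appears and the driver extracts the IP. A minimal Go sketch of that lease-file lookup, written against the stock /var/db/dhcpd_leases entry layout (ip_address= / hw_address= pairs) rather than minikube's actual parser:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPForMAC scans a bootpd lease file for a hardware address and returns
    // the IP recorded in the same entry. In /var/db/dhcpd_leases each entry
    // lists ip_address= before hw_address=, which this sketch relies on.
    func findIPForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if v, ok := strings.CutPrefix(line, "ip_address="); ok {
                ip = v
            }
            if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
                return ip, nil
            }
        }
        return "", fmt.Errorf("MAC %s not found in %s", mac, path)
    }

    func main() {
        ip, err := findIPForMAC("/var/db/dhcpd_leases", "b2:ad:7c:2f:19:d9")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("VM IP:", ip) // the log above resolves to 192.169.0.5
    }

The roughly two-second spacing of the attempt timestamps above matches a fixed pause between polls of the lease database.
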
	I0819 10:27:20.759952    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:20.760523    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760634    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760741    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:27:20.760753    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:20.760839    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.760885    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.761678    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:27:20.761690    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:27:20.761696    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:27:20.761702    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:20.761795    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:20.761883    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.761969    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.762060    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:20.762168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:20.762361    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:20.762369    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:27:21.818394    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
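
Once the IP is known, libmachine waits for SSH and proves liveness by running `exit 0` over the session; the empty "SSH cmd err, output" above is that probe succeeding. A simplified stand-in that only checks TCP reachability of port 22, not the authenticated command the real code runs:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls until port 22 accepts a TCP connection or the deadline
    // passes. Illustrative only; the real check runs `exit 0` over SSH.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // port 22 is accepting connections
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("ssh not reachable on %s within %s", addr, timeout)
    }

    func main() {
        if err := waitForSSH("192.169.0.5:22", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
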
	I0819 10:27:21.818406    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:27:21.818419    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.818554    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.818654    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818747    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818841    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.818981    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.819131    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.819139    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:27:21.870784    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:27:21.870826    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:27:21.870831    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:27:21.870837    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.870976    4789 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:27:21.870986    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.871077    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.871169    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.871272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871352    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871452    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.871577    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.871711    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.871719    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:27:21.937676    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:27:21.937694    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.937826    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.937927    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938017    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938112    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.938245    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.938391    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.938402    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:27:21.996654    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
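
The conditional above keeps /etc/hosts idempotent: if any line already ends with the hostname, nothing changes; otherwise an existing 127.0.1.1 entry is rewritten in place, and only as a last resort is a new line appended. The same decision tree as a small Go function (a sketch of the logic, not code minikube ships):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // patchHosts mirrors the shell conditional: skip if present, rewrite a
    // stale 127.0.1.1 line, or append a fresh entry.
    func patchHosts(data, host string) string {
        lines := strings.Split(data, "\n")
        for _, l := range lines {
            if strings.HasSuffix(l, " "+host) || strings.HasSuffix(l, "\t"+host) {
                return data // hostname already mapped
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + host // replace the stale entry
                return strings.Join(lines, "\n")
            }
        }
        return data + "\n127.0.1.1 " + host + "\n"
    }

    func main() {
        b, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(patchHosts(string(b), "ha-431000"))
    }
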
	I0819 10:27:21.996676    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:27:21.996692    4789 buildroot.go:174] setting up certificates
	I0819 10:27:21.996701    4789 provision.go:84] configureAuth start
	I0819 10:27:21.996714    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.996873    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:21.996990    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.997094    4789 provision.go:143] copyHostCerts
	I0819 10:27:21.997133    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997201    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:27:21.997209    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997337    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:27:21.997534    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997567    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:27:21.997572    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997714    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:27:21.997882    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.997926    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:27:21.997941    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.998049    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:27:21.998203    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:27:22.044837    4789 provision.go:177] copyRemoteCerts
	I0819 10:27:22.044896    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:27:22.044908    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.045021    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.045107    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.045191    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.045288    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:22.078701    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:27:22.078779    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:27:22.098027    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:27:22.098092    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:27:22.117169    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:27:22.117235    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:27:22.137411    4789 provision.go:87] duration metric: took 140.68689ms to configureAuth
	I0819 10:27:22.137424    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:27:22.137558    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:22.137574    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:22.137700    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.137783    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.137859    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.137942    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.138028    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.138134    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.138266    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.138274    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:27:22.191384    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:27:22.191397    4789 buildroot.go:70] root file system type: tmpfs
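
Knowing the root filesystem type matters here because a tmpfs root generally means anything written outside the persisted paths is gone after a reboot, so the provisioner regenerates configuration such as the docker unit below instead of assuming it survived. The detection itself is just the `df` one-liner above; a Go equivalent that shells out the same way (sketch only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as the logged command: df --output=fstype / | tail -n 1
        out, err := exec.Command("df", "--output=fstype", "/").Output()
        if err != nil {
            fmt.Println("df failed:", err)
            return
        }
        fields := strings.Fields(string(out)) // e.g. ["Type", "tmpfs"]
        if len(fields) == 0 {
            return
        }
        fmt.Println("root fstype:", fields[len(fields)-1])
    }
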
	I0819 10:27:22.191469    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:27:22.191481    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.191636    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.191724    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191834    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191924    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.192051    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.192193    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.192236    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:27:22.256138    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:27:22.256165    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.256301    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.256391    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256475    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256578    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.256695    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.256839    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.256851    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:27:23.816844    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
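The long command above is an install-if-changed idiom: `diff` the staged docker.service.new against the live unit and, only when they differ (or when the live unit does not exist yet, which is the "can't stat" case on this first boot), move the new file into place, reload systemd, and enable and restart docker. The same flow shelled out from Go, as an illustration rather than minikube's implementation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // installIfChanged swaps in the staged unit only when it differs from the
    // live one. diff exits 0 when the files match; a missing live unit (first
    // boot) also takes the install path, as the log shows.
    func installIfChanged(staged, live string) error {
        if err := exec.Command("sudo", "diff", "-u", live, staged).Run(); err == nil {
            return nil // unchanged, nothing to do
        }
        steps := [][]string{
            {"sudo", "mv", staged, live},
            {"sudo", "systemctl", "-f", "daemon-reload"},
            {"sudo", "systemctl", "-f", "enable", "docker"},
            {"sudo", "systemctl", "-f", "restart", "docker"},
        }
        for _, s := range steps {
            if err := exec.Command(s[0], s[1:]...).Run(); err != nil {
                return fmt.Errorf("%v: %w", s, err)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(installIfChanged(
            "/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service",
        ))
    }
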
	I0819 10:27:23.816860    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:27:23.816871    4789 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0819 10:27:23.817008    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:27:23.817016    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:27:23.817020    4789 client.go:171] duration metric: took 13.841219093s to LocalClient.Create
	I0819 10:27:23.817036    4789 start.go:167] duration metric: took 13.84126124s to libmachine.API.Create "ha-431000"
	I0819 10:27:23.817044    4789 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:27:23.817051    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:27:23.817063    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.817219    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:27:23.817232    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.817321    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.817402    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.817497    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.817595    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.852993    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:27:23.857771    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:27:23.857792    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:27:23.857909    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:27:23.858094    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:27:23.858100    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:27:23.858323    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:27:23.868639    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:23.894485    4789 start.go:296] duration metric: took 77.430316ms for postStartSetup
	I0819 10:27:23.894509    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:23.895099    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.895256    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:23.895585    4789 start.go:128] duration metric: took 13.953185373s to createHost
	I0819 10:27:23.895598    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.895691    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.895790    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895879    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895966    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.896069    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:23.896228    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:23.896236    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:27:23.956133    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088443.744394113
	
	I0819 10:27:23.956145    4789 fix.go:216] guest clock: 1724088443.744394113
	I0819 10:27:23.956151    4789 fix.go:229] Guest: 2024-08-19 10:27:23.744394113 -0700 PDT Remote: 2024-08-19 10:27:23.895593 -0700 PDT m=+14.491162031 (delta=-151.198887ms)
	I0819 10:27:23.956169    4789 fix.go:200] guest clock delta is within tolerance: -151.198887ms
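
The guest clock was read a moment earlier with `date +%s.%N` over SSH; the fix logic compares it against the host's wall clock and skips any adjustment because the -151ms delta is inside the permitted skew. A sketch of that comparison; the one-second tolerance below is an assumed value for illustration, not the threshold minikube uses:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports the guest-minus-host delta and whether its
    // magnitude is inside the allowed skew.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            return delta, -delta <= tol
        }
        return delta, delta <= tol
    }

    func main() {
        host := time.Now()
        guest := host.Add(-151 * time.Millisecond) // the delta from the log above
        delta, ok := withinTolerance(guest, host, time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
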
	I0819 10:27:23.956173    4789 start.go:83] releasing machines lock for "ha-431000", held for 14.013893151s
	I0819 10:27:23.956192    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956322    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.956416    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956749    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956860    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956951    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:27:23.956980    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957023    4789 ssh_runner.go:195] Run: cat /version.json
	I0819 10:27:23.957036    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957073    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957109    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957170    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957184    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957292    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957350    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.957384    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:24.032926    4789 ssh_runner.go:195] Run: systemctl --version
	I0819 10:27:24.037723    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:27:24.041939    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:27:24.041985    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:27:24.055424    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:27:24.055435    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.055529    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.070257    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:27:24.079169    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:27:24.088264    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.088319    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:27:24.097172    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.105902    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:27:24.114585    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.123406    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:27:24.132626    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:27:24.141378    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:27:24.150490    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:27:24.158980    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:27:24.167068    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:27:24.175030    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.269460    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
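
The run of sed commands above rewrites /etc/containerd/config.toml to use the cgroupfs driver (SystemdCgroup = false), normalizes the runc runtime to io.containerd.runc.v2, and resets the CNI conf_dir before bouncing the service. The core SystemdCgroup edit, redone with Go's regexp instead of sed (same expression, sketch only):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        in := "    SystemdCgroup = true\n"
        fmt.Print(re.ReplaceAllString(in, "${1}SystemdCgroup = false"))
    }
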
	I0819 10:27:24.289328    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.289405    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:27:24.304907    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.317291    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:27:24.330289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.340851    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.351456    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:27:24.376914    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.387402    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.402522    4789 ssh_runner.go:195] Run: which cri-dockerd
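
Because docker is the selected runtime, /etc/crictl.yaml was just rewritten (two commands up) to point crictl at the cri-dockerd socket, and `which cri-dockerd` confirms the shim binary is present before its systemd units are wired up. The config write itself is one line; a Go equivalent of the printf-plus-tee step, to be run as root on the guest:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Exactly the content the log writes; cri-dockerd bridges the CRI to
        // the docker engine.
        const cfg = "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
        if err := os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0o644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        fmt.Println("crictl now targets cri-dockerd")
    }
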
	I0819 10:27:24.405426    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:27:24.412799    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:27:24.426019    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:27:24.528550    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:27:24.636829    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.636893    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:27:24.652027    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.753641    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:27.037286    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.283575266s)
	I0819 10:27:27.037346    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:27:27.047775    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:27:27.062961    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.074027    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:27:27.172330    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:27:27.284593    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.395779    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:27:27.409552    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.420868    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.532356    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:27:27.591558    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:27:27.591636    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:27:27.595967    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:27:27.596013    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:27:27.599275    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:27:27.625101    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:27:27.625173    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.642636    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.693299    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:27:27.693355    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:27.693783    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:27:27.698129    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:27.708916    4789 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:27:27.708982    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:27.709038    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:27.721971    4789 docker.go:685] Got preloaded images: 
	I0819 10:27:27.721984    4789 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0819 10:27:27.722034    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:27.730353    4789 ssh_runner.go:195] Run: which lz4
	I0819 10:27:27.733218    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 10:27:27.733323    4789 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:27:27.736425    4789 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:27:27.736445    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0819 10:27:28.750864    4789 docker.go:649] duration metric: took 1.017557348s to copy over tarball
	I0819 10:27:28.750956    4789 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:27:31.074672    4789 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323648699s)
	I0819 10:27:31.074688    4789 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:27:31.100633    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:31.109680    4789 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0819 10:27:31.123335    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:31.234501    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:33.578614    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344043512s)
	I0819 10:27:33.578701    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:33.592021    4789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
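	Reproduction note: the preload is a plain lz4-compressed tar of /var/lib/docker, which is why the single tar -xf above restores all eight images at once. Its contents can be listed on the host without extracting (assumes lz4 and tar are installed):
		lz4 -dc /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 | tar -t | head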
	I0819 10:27:33.592040    4789 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:27:33.592048    4789 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:27:33.592132    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:27:33.592198    4789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:27:33.629283    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:33.629295    4789 cni.go:136] multinode detected (1 node found), recommending kindnet
	I0819 10:27:33.629309    4789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:27:33.629329    4789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:27:33.629424    4789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
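	Reproduction note: kubeadm later warns that this file uses the deprecated kubeadm.k8s.io/v1beta3 API, so a generated config like the one above can be sanity-checked against the bundled v1.31.0 binary before init (kubeadm config validate is assumed available at this version):
		sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml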
	
	I0819 10:27:33.629439    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:27:33.629491    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:27:33.642904    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:27:33.642969    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
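	Reproduction note: once this static pod is running it should claim the APIServerHAVIP (192.169.0.254) on eth0, and /healthz is readable without credentials under default RBAC. Illustrative in-VM checks:
		ip addr show eth0 | grep 192.169.0.254
		curl -k https://192.169.0.254:8443/healthz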
	I0819 10:27:33.643018    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:27:33.652008    4789 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:27:33.652070    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:27:33.660066    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:27:33.673571    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:27:33.686700    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:27:33.700085    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0819 10:27:33.713804    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:27:33.716661    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:33.726684    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:33.822205    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:27:33.836833    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:27:33.836844    4789 certs.go:194] generating shared ca certs ...
	I0819 10:27:33.836855    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.837051    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:27:33.837132    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:27:33.837142    4789 certs.go:256] generating profile certs ...
	I0819 10:27:33.837189    4789 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:27:33.837203    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0819 10:27:33.888319    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt ...
	I0819 10:27:33.888333    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt: {Name:mk2ecc34873277fbe11bf267ec0d97684e18e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888666    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key ...
	I0819 10:27:33.888675    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key: {Name:mk51abee214c838f4621902241303fe73ba93aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888900    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e
	I0819 10:27:33.888915    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0819 10:27:34.060027    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e ...
	I0819 10:27:34.060046    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e: {Name:mk108eb9cf88ab2aae15883e4a3724751adb3118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060347    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e ...
	I0819 10:27:34.060356    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e: {Name:mk8fae11cce9c9a45d3e151953d1ee9ab2cc82d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060557    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:27:34.060759    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:27:34.060929    4789 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:27:34.060943    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0819 10:27:34.243675    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0819 10:27:34.243690    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mkeb1eac7ee8b3901067565b7ff883710f2d1088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244061    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key ...
	I0819 10:27:34.244069    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key: {Name:mkc1afcd7a6a9a572716155e33c32e7def81650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244312    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:27:34.244340    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:27:34.244378    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:27:34.244398    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:27:34.244416    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:27:34.244448    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:27:34.244486    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:27:34.244521    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:27:34.244615    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:27:34.244666    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:27:34.244675    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:27:34.244748    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:27:34.244776    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:27:34.244831    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:27:34.244909    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:34.244942    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.244990    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.245007    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.245522    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:27:34.267677    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:27:34.287348    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:27:34.309971    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:27:34.330910    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 10:27:34.350036    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:27:34.370663    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:27:34.390457    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:27:34.410226    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:27:34.431025    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:27:34.451232    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:27:34.471133    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:27:34.487758    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:27:34.493769    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:27:34.506308    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511941    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511996    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.519851    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:27:34.531120    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:27:34.540803    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544302    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544341    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.548724    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:27:34.558817    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:27:34.568088    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571692    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571731    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.575999    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
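	Reproduction note: the openssl x509 -hash calls above compute the subject-hash filenames OpenSSL uses to look up CAs in /etc/ssl/certs, which is where the 51391683.0, 3ec20f2e.0 and b5213941.0 symlink names come from:
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, hence /etc/ssl/certs/b5213941.0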
	I0819 10:27:34.585057    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:27:34.588207    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:27:34.588251    4789 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:34.588345    4789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:27:34.601241    4789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:27:34.609838    4789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:27:34.618794    4789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:27:34.627200    4789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:27:34.627208    4789 kubeadm.go:157] found existing configuration files:
	
	I0819 10:27:34.627243    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:27:34.635162    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:27:34.635198    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:27:34.643336    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:27:34.651247    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:27:34.651280    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:27:34.659346    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.667240    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:27:34.667281    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.675386    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:27:34.684053    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:27:34.684105    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 10:27:34.692357    4789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:27:34.751991    4789 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:27:34.752160    4789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:27:34.833970    4789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:27:34.834062    4789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:27:34.834153    4789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:27:34.842513    4789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:27:34.863067    4789 out.go:235]   - Generating certificates and keys ...
	I0819 10:27:34.863126    4789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:27:34.863179    4789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:27:35.003012    4789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:27:35.766829    4789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:27:35.976153    4789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:27:36.134850    4789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:27:36.228947    4789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:27:36.229166    4789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.375842    4789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:27:36.375934    4789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.597289    4789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:27:36.907219    4789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:27:37.426404    4789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:27:37.426585    4789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:27:37.566387    4789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:27:38.000620    4789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:27:38.121335    4789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:27:38.179042    4789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:27:38.231270    4789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:27:38.231752    4789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:27:38.233818    4789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:27:38.255454    4789 out.go:235]   - Booting up control plane ...
	I0819 10:27:38.255535    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:27:38.255605    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:27:38.255655    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:27:38.255734    4789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:27:38.255809    4789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:27:38.255842    4789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:27:38.364951    4789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:27:38.365069    4789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:27:39.366309    4789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001984632s
	I0819 10:27:39.366388    4789 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:27:45.029099    4789 kubeadm.go:310] [api-check] The API server is healthy after 5.666724975s
	I0819 10:27:45.039440    4789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:27:45.046481    4789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:27:45.059797    4789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:27:45.059959    4789 kubeadm.go:310] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:27:45.067482    4789 kubeadm.go:310] [bootstrap-token] Using token: rrr6yu.ivgebthw63l7ehzv
	I0819 10:27:45.106820    4789 out.go:235]   - Configuring RBAC rules ...
	I0819 10:27:45.107004    4789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:27:45.110638    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:27:45.151902    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:27:45.154406    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:27:45.156223    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:27:45.158190    4789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:27:45.434935    4789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:27:45.846068    4789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:27:46.434136    4789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:27:46.434675    4789 kubeadm.go:310] 
	I0819 10:27:46.434724    4789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:27:46.434728    4789 kubeadm.go:310] 
	I0819 10:27:46.434798    4789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:27:46.434808    4789 kubeadm.go:310] 
	I0819 10:27:46.434829    4789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:27:46.434881    4789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:27:46.434925    4789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:27:46.434930    4789 kubeadm.go:310] 
	I0819 10:27:46.434974    4789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:27:46.434984    4789 kubeadm.go:310] 
	I0819 10:27:46.435035    4789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:27:46.435041    4789 kubeadm.go:310] 
	I0819 10:27:46.435080    4789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:27:46.435139    4789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:27:46.435197    4789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:27:46.435204    4789 kubeadm.go:310] 
	I0819 10:27:46.435268    4789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:27:46.435333    4789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:27:46.435337    4789 kubeadm.go:310] 
	I0819 10:27:46.435410    4789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435498    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd \
	I0819 10:27:46.435515    4789 kubeadm.go:310] 	--control-plane 
	I0819 10:27:46.435520    4789 kubeadm.go:310] 
	I0819 10:27:46.435589    4789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:27:46.435594    4789 kubeadm.go:310] 
	I0819 10:27:46.435664    4789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435746    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd 
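	Reproduction note: if the printed join command is lost, the --discovery-token-ca-cert-hash can be recomputed from the cluster CA using the standard kubeadm recipe (pointed here at minikube's certificateDir rather than the usual /etc/kubernetes/pki):
		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'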
	I0819 10:27:46.435997    4789 kubeadm.go:310] W0819 17:27:34.545490    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436229    4789 kubeadm.go:310] W0819 17:27:34.546600    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436316    4789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
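	Reproduction note: the two deprecation warnings above are kubeadm flagging the v1beta3 config generated earlier; the migration it suggests is a one-shot rewrite (the --new-config file name below is illustrative):
		sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-new.yaml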
	I0819 10:27:46.436331    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:46.436337    4789 cni.go:136] multinode detected (1 node found), recommending kindnet
	I0819 10:27:46.458203    4789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:27:46.517773    4789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:27:46.523858    4789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:27:46.523872    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:27:46.539513    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 10:27:46.759807    4789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:27:46.759878    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:46.759883    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0819 10:27:46.777623    4789 ops.go:34] apiserver oom_adj: -16
	I0819 10:27:46.926523    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.427175    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.927281    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.428033    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.926686    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.426608    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.926666    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:50.010199    4789 kubeadm.go:1113] duration metric: took 3.25030545s to wait for elevateKubeSystemPrivileges
	I0819 10:27:50.010216    4789 kubeadm.go:394] duration metric: took 15.42163041s to StartCluster
	I0819 10:27:50.010227    4789 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.010325    4789 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.010762    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.011021    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:27:50.011033    4789 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.011050    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:27:50.011076    4789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:27:50.011116    4789 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0819 10:27:50.011120    4789 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0819 10:27:50.011148    4789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0819 10:27:50.011152    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.011155    4789 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0819 10:27:50.011186    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.011415    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011420    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011430    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.011431    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.020667    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
	I0819 10:27:50.021171    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021230    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
	I0819 10:27:50.021523    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021533    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.021634    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021753    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.021940    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.022115    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.022146    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.022229    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.022806    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.022988    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.023051    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.024924    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.025156    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:27:50.025529    4789 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:27:50.025699    4789 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0819 10:27:50.025720    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.025937    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.025963    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.031229    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51138
	I0819 10:27:50.031604    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.031942    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.031953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.032154    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.032270    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.032358    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.032435    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.033436    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.034958    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51140
	I0819 10:27:50.035269    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.035586    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.035596    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.035796    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.036148    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.036165    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.044937    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51142
	I0819 10:27:50.045312    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.045667    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.045680    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.045893    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.045996    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.046077    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.046151    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.047102    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.047225    4789 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.047234    4789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:27:50.047243    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.047325    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.047417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.047495    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.047571    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.056055    4789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:27:50.076134    4789 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.076146    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:27:50.076163    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.076310    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.076417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.076556    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.076664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.113554    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 10:27:50.127003    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.262022    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.488277    4789 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
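The ssh_runner command above rewrites the coredns ConfigMap in place so that the guest can resolve the hyperkit host. Reconstructed from the sed expressions embedded in that command, the fragment spliced into the Corefile immediately before the "forward . /etc/resolv.conf" plugin is:

	hosts {
	   192.169.0.1 host.minikube.internal
	   fallthrough
	}

The second sed expression also inserts a "log" directive above "errors", and the edited manifest is fed back through kubectl replace, which is what the "host record injected" line confirms.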
	I0819 10:27:50.488318    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488327    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488534    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488547    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488556    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488563    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488564    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488681    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488704    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488718    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488767    4789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:27:50.488780    4789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:27:50.488862    4789 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 10:27:50.488867    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.488877    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.488882    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495057    4789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:27:50.495477    4789 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 10:27:50.495484    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.495490    4789 round_trippers.go:473]     Content-Type: application/json
	I0819 10:27:50.495494    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.498504    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
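The GET/PUT pair above is the default-storageclass addon asserting the "standard" class as the cluster default. A minimal client-go sketch of the same operation (illustrative only, not minikube's code; the kubeconfig path is taken from the commands above):

	// Illustrative sketch, not minikube's code: mark the "standard"
	// StorageClass as the cluster default via the well-known annotation.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// Consumed by the DefaultStorageClass admission plugin.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("standard is now the default StorageClass")
	}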
	I0819 10:27:50.498632    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.498641    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.498797    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.498806    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.498814    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649595    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649607    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.649833    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.649843    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649848    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.649874    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649893    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.650019    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.650028    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.650044    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.673040    4789 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 10:27:50.709732    4789 addons.go:510] duration metric: took 698.654107ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 10:27:50.709774    4789 start.go:246] waiting for cluster config update ...
	I0819 10:27:50.709799    4789 start.go:255] writing updated cluster config ...
	I0819 10:27:50.746763    4789 out.go:201] 
	I0819 10:27:50.768467    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.768565    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.790908    4789 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:27:50.832651    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:50.832673    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:50.832790    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:50.832801    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:50.832852    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.833261    4789 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:50.833314    4789 start.go:364] duration metric: took 41.162µs to acquireMachinesLock for "ha-431000-m02"
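acquireMachinesLock serializes host creation per profile; note the Delay:500ms and Timeout:13m0s in the lock spec above. A minimal sketch of the same idea using an exclusive lock file (minikube uses a mutex library internally, so this illustrates the pattern, not its implementation):

	// Illustrative sketch: acquire a named lock by creating a lock file
	// exclusively, polling every 500ms until a deadline.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/ha-431000-m02.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held; safe to create the machine")
	}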
	I0819 10:27:50.833329    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.833382    4789 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0819 10:27:50.854688    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:50.854833    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.854870    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.864309    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
	I0819 10:27:50.864640    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.864951    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.864963    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.865175    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.865294    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:27:50.865374    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:27:50.865472    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:50.865485    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:50.865515    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:50.865553    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865565    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865607    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:50.865634    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865649    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865666    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:50.865676    4789 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0819 10:27:50.865754    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.865776    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:27:50.891966    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:50.891987    4789 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0819 10:27:50.892145    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.892330    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:50.892137    4845 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:50.892421    4789 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:51.078705    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.078630    4845 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa...
	I0819 10:27:51.171843    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.171751    4845 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0819 10:27:51.171860    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0819 10:27:51.171868    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0819 10:27:51.172685    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.172591    4845 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02 ...
	I0819 10:27:51.544884    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.544910    4789 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:27:51.544922    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:27:51.571631    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:27:51.571653    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:51.571680    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571706    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571739    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:51.571767    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:51.571780    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:51.574668    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Pid is 4850
	I0819 10:27:51.575734    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:27:51.575757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.575783    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:51.576702    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:51.576759    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:51.576778    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:51.576816    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:51.576830    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:51.576844    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:51.582262    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:51.590515    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:51.591362    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:51.591388    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:51.591397    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:51.591407    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:51.978930    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:51.978947    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:52.094059    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:52.094091    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:52.094127    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:52.094142    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:52.094869    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:52.094879    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:53.577521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0819 10:27:53.577541    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:53.577636    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:53.578446    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:53.578461    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:53.578472    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:53.578481    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:53.578489    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:53.578507    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:55.579485    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0819 10:27:55.579501    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:55.579576    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:55.580358    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:55.580387    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:55.580414    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:55.580426    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:55.580434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:55.580442    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.581588    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0819 10:27:57.581603    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:57.581681    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:57.582486    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:57.582510    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:57.582521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:57.582530    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:57.582540    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:57.582548    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.680321    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:27:57.680434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:27:57.680445    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:27:57.704982    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:27:59.583757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0819 10:27:59.583772    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:59.583842    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:59.584652    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:59.584696    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:59.584710    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:59.584720    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:59.584729    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:59.584737    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:28:01.585137    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0819 10:28:01.585154    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.585235    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.585996    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:28:01.586042    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:28:01.586055    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:28:01.586080    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:28:01.586086    4789 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
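The retry loop above polls /var/db/dhcpd_leases every two seconds until the MAC generated for the new VM appears with a lease. A sketch of the same lookup; the field names and block layout are assumptions about the macOS bootpd lease format, not taken from this log:

	// Illustrative sketch: resolve the IP for a MAC address by scanning
	// /var/db/dhcpd_leases. Assumes ip_address= precedes hw_address= within
	// each lease block, per the macOS bootpd format.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address carries a leading type byte, e.g. "1,5a:74:68:47:b9:72".
				if strings.HasSuffix(line, mac) {
					return ip, nil
				}
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "5a:74:68:47:b9:72")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 192.169.0.6, per the match in the log above
	}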
	I0819 10:28:01.586098    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:01.586694    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586794    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586889    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:28:01.586896    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:28:01.586980    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.587029    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.587790    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:28:01.587796    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:28:01.587800    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:28:01.587804    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:01.587881    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:01.587956    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588060    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588138    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:01.588256    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:01.588435    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:01.588443    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:28:02.645180    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
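"Waiting for SSH" runs "exit 0" over the native Go SSH client until the guest answers, which it does here on the first try. A minimal sketch of that probe with golang.org/x/crypto/ssh (illustrative; the retry bounds are assumptions, not minikube's actual values):

	// Illustrative sketch: probe SSH readiness by running "exit 0" with
	// key-based auth until the guest responds.
	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func waitForSSH(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
			Timeout:         5 * time.Second,
		}
		for attempt := 0; attempt < 60; attempt++ {
			if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
				sess, err := client.NewSession()
				if err == nil {
					runErr := sess.Run("exit 0")
					sess.Close()
					if runErr == nil {
						client.Close()
						return nil
					}
				}
				client.Close()
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh not reachable at %s", addr)
	}

	func main() {
		key := "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa"
		if err := waitForSSH("192.169.0.6:22", "docker", key); err != nil {
			panic(err)
		}
		fmt.Println("SSH is available")
	}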
	I0819 10:28:02.645193    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:28:02.645198    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.645326    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.645422    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645501    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645583    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.645718    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.645869    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.645877    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:28:02.700961    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:28:02.700992    4789 main.go:141] libmachine: found compatible host: buildroot
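Provisioner detection is driven by the "cat /etc/os-release" output shown above: the ID field selects the buildroot provisioner. A small sketch of that parse (illustrative, not minikube's code):

	// Illustrative sketch: identify the provisioner by the ID field of
	// /etc/os-release, as in the "found compatible host: buildroot" step.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func osID(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
				return strings.Trim(v, `"`), nil
			}
		}
		return "", fmt.Errorf("no ID= entry in %s", path)
	}

	func main() {
		id, err := osID("/etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Println(id) // "buildroot" on the guest above
	}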
	I0819 10:28:02.700998    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:28:02.701003    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701132    4789 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:28:02.701143    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701237    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.701327    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.701424    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701502    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701588    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.701720    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.701855    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.701864    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:28:02.773500    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:28:02.773515    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.773649    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.773737    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773840    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773945    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.774071    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.774226    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.774237    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:28:02.838956    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:28:02.838971    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:28:02.838984    4789 buildroot.go:174] setting up certificates
	I0819 10:28:02.838992    4789 provision.go:84] configureAuth start
	I0819 10:28:02.838998    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.839135    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:02.839223    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.839322    4789 provision.go:143] copyHostCerts
	I0819 10:28:02.839347    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839393    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:28:02.839399    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839532    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:28:02.839738    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839769    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:28:02.839774    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839845    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:28:02.839992    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840021    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:28:02.840025    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840090    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:28:02.840244    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
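The provisioner mints a server certificate signed by the machine CA with the SANs listed above (127.0.0.1, 192.169.0.6, ha-431000-m02, localhost, minikube). A compact crypto/x509 sketch of that step, assuming PEM-encoded PKCS#1 RSA CA material; the file names here are placeholders:

	// Illustrative sketch: issue a server certificate signed by an existing
	// CA, with the IP and DNS SANs from the log above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
		keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
		if caBlock == nil || keyBlock == nil {
			panic("bad PEM input")
		}
		caCert := must(x509.ParseCertificate(caBlock.Bytes))
		caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

		leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()), // toy serial; use crypto/rand in practice
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m02"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-431000-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		}
		der := must(x509.CreateCertificate(rand.Reader, &tmpl, caCert, &leafKey.PublicKey, caKey))
		if err := os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644); err != nil {
			panic(err)
		}
		keyDER := x509.MarshalPKCS1PrivateKey(leafKey)
		if err := os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: keyDER}), 0o600); err != nil {
			panic(err)
		}
	}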
	I0819 10:28:02.878856    4789 provision.go:177] copyRemoteCerts
	I0819 10:28:02.878899    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:28:02.878912    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.879041    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.879132    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.879231    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.879330    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:02.914748    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:28:02.914819    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:28:02.934608    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:28:02.934673    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:28:02.954833    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:28:02.954900    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:28:02.974652    4789 provision.go:87] duration metric: took 135.649275ms to configureAuth
	I0819 10:28:02.974666    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:28:02.974809    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:02.974823    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:02.974958    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.975063    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.975147    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975219    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975328    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.975454    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.975601    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.975609    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:28:03.033628    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:28:03.033639    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:28:03.033715    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:28:03.033730    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.033861    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.033950    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034053    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034140    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.034264    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.034412    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.034459    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:28:03.102644    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:28:03.102663    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.102811    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.102898    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.102999    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.103120    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.103244    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.103390    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.103404    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:28:04.637367    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
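The diff-or-replace one-liner above is an idempotent update: only when docker.service.new differs from the installed unit (here it doesn't exist yet, hence the "can't stat" output) does it move the file into place, daemon-reload, enable, and restart. The same idiom in Go, as a sketch (shells out to systemctl; not minikube's code):

	// Illustrative sketch: install a rendered systemd unit only when it
	// differs from what's on disk, then reload, enable, and restart.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func installUnit(path string, desired []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, desired) {
			return nil // unchanged: skip the reload/restart entirely
		}
		if err := os.WriteFile(path, desired, 0o644); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		desired, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		if err := installUnit("/lib/systemd/system/docker.service", desired); err != nil {
			panic(err)
		}
	}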
	I0819 10:28:04.637381    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:28:04.637388    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0819 10:28:04.637524    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:28:04.637530    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:28:04.637534    4789 client.go:171] duration metric: took 13.771742286s to LocalClient.Create
	I0819 10:28:04.637544    4789 start.go:167] duration metric: took 13.771771513s to libmachine.API.Create "ha-431000"
	I0819 10:28:04.637550    4789 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:28:04.637557    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:28:04.637566    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.637712    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:28:04.637723    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.637834    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.637926    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.638026    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.638127    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.678475    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:28:04.682965    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:28:04.682980    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:28:04.683079    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:28:04.683246    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:28:04.683253    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:28:04.683434    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:28:04.695086    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:04.723279    4789 start.go:296] duration metric: took 85.720185ms for postStartSetup
	I0819 10:28:04.723311    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:04.723943    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.724123    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:28:04.724446    4789 start.go:128] duration metric: took 13.890752069s to createHost
	I0819 10:28:04.724460    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.724558    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.724679    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724786    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724871    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.724979    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:04.725097    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:04.725103    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:28:04.784682    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088484.852271103
	
	I0819 10:28:04.784694    4789 fix.go:216] guest clock: 1724088484.852271103
	I0819 10:28:04.784698    4789 fix.go:229] Guest: 2024-08-19 10:28:04.852271103 -0700 PDT Remote: 2024-08-19 10:28:04.724454 -0700 PDT m=+55.319126445 (delta=127.817103ms)
	I0819 10:28:04.784725    4789 fix.go:200] guest clock delta is within tolerance: 127.817103ms
	I0819 10:28:04.784731    4789 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.951104834s
	I0819 10:28:04.784750    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.784884    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.807240    4789 out.go:177] * Found network options:
	I0819 10:28:04.829600    4789 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:28:04.851548    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.851607    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852495    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852747    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852876    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:28:04.852915    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:28:04.852962    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.853080    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:28:04.853100    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.853127    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853372    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853394    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853596    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853633    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853742    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853804    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.853880    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:28:04.886788    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:28:04.886847    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:28:04.931189    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:28:04.931209    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:04.931315    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:04.947443    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:28:04.955693    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:28:04.964155    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:28:04.964197    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:28:04.972493    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.980548    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:28:04.988709    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.996856    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:28:05.005271    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:28:05.013575    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:28:05.021801    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:28:05.030285    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:28:05.037842    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:28:05.045332    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.140730    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
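
The sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false) before the restart. A Go sketch of the key rewrite, assuming the same file path; it touches only the SystemdCgroup line:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml" // same file the sed commands edit
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
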
	I0819 10:28:05.159555    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:05.159625    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:28:05.177222    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.189624    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:28:05.203743    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.214606    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.224836    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:28:05.249649    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.261132    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:05.276191    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:28:05.279129    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:28:05.287175    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:28:05.300748    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:28:05.396444    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:28:05.505778    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:28:05.505805    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:28:05.520914    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.616215    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:28:07.911303    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.295016426s)
	I0819 10:28:07.911366    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:28:07.923467    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:28:07.938312    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:07.949283    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:28:08.046922    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:28:08.152880    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.256594    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:28:08.271339    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:08.283089    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.384798    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:28:08.441813    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:28:08.441881    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:28:08.446421    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:28:08.446473    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:28:08.449807    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:28:08.479621    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:28:08.479690    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.496571    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.537488    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:28:08.579078    4789 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:28:08.603340    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:08.603786    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:28:08.608372    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
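
The bash pipeline above pins host.minikube.internal in /etc/hosts: it drops any stale line for that name and appends the current mapping. The same idempotent update in Go, with the IP and hostname taken from the log:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/etc/hosts"
		entry     = "192.169.0.1\thost.minikube.internal" // mapping from the log
	)
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale line for this name, as the grep -v above does.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
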
	I0819 10:28:08.618166    4789 mustload.go:65] Loading cluster: ha-431000
	I0819 10:28:08.618314    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:08.618533    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.618549    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.627122    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
	I0819 10:28:08.627459    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.627845    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.627857    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.628097    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.628239    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:28:08.628342    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:08.628430    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:28:08.629353    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:08.629592    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.629608    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.638041    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51172
	I0819 10:28:08.638388    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.638753    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.638770    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.638992    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.639108    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:08.639209    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:28:08.639216    4789 certs.go:194] generating shared ca certs ...
	I0819 10:28:08.639225    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.639357    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:28:08.639425    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:28:08.639434    4789 certs.go:256] generating profile certs ...
	I0819 10:28:08.639538    4789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:28:08.639562    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788
	I0819 10:28:08.639575    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0819 10:28:08.693749    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 ...
	I0819 10:28:08.693766    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788: {Name:mkade16cb35e521e9e55fc42d7cb129c8b94b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694149    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 ...
	I0819 10:28:08.694160    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788: {Name:mkeae0a28d48da45f84299952289f15db5f944f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694378    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:28:08.694703    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
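
The apiserver certificate generated above embeds the service VIP (10.96.0.1), localhost, both node IPs, and the HA VIP (192.169.0.254) as IP SANs, signed by minikubeCA. A self-contained Go sketch of CA-signed issuance with those SANs; a throwaway CA stands in for the cached .minikube/ca.key, and the expiry matches the profile's CertExpiration of 26280h:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the cached minikubeCA key; the real run
	// reports "skipping valid \"minikubeCA\" ca cert" and reuses it.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Apiserver leaf cert with the IP SANs listed in the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"), net.ParseIP("192.169.0.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
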
	I0819 10:28:08.694954    4789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:28:08.694964    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:28:08.694987    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:28:08.695006    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:28:08.695024    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:28:08.695042    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:28:08.695060    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:28:08.695078    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:28:08.695096    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:28:08.695175    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:28:08.695213    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:28:08.695228    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:28:08.695261    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:28:08.695290    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:28:08.695321    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:28:08.695400    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:08.695438    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:28:08.695462    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:28:08.695482    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:08.695511    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:08.695664    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:08.695745    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:08.695845    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:08.695925    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:08.729193    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:28:08.736181    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:28:08.748665    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:28:08.751826    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:28:08.773481    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:28:08.777252    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:28:08.787546    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:28:08.791015    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:28:08.800105    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:28:08.803218    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:28:08.812240    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:28:08.815351    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:28:08.824083    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:28:08.844052    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:28:08.864107    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:28:08.884612    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:28:08.904284    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 10:28:08.924397    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:28:08.944026    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:28:08.964689    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:28:08.984934    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:28:09.004413    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:28:09.024043    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:28:09.043924    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:28:09.058066    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:28:09.071585    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:28:09.085080    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:28:09.098536    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:28:09.112048    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:28:09.125242    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:28:09.139717    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:28:09.144032    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:28:09.152602    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.155967    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.156009    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.160192    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:28:09.168568    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:28:09.176997    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180533    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180568    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.184799    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:28:09.193356    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:28:09.201811    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205453    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205494    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.209760    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
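
Each CA bundle installed above gets an OpenSSL subject-hash symlink under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem) so TLS clients can locate it. A Go sketch mirroring the `openssl x509 -hash -noout` plus `test -L || ln -fs` pair for one cert; the cert path is one from the log:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // one of the certs from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Mirror `test -L ... || ln -fs ...`: only create the link if missing.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
	}
}
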
	I0819 10:28:09.218392    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:28:09.222392    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:28:09.222437    4789 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:28:09.222498    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:28:09.222516    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:28:09.222559    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:28:09.234408    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:28:09.234452    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
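
The kube-vip static-pod manifest above is presumably rendered from a Go template, with only a few values varying per cluster (here the HA VIP 192.169.0.254 and port 8443). An illustrative text/template sketch of that rendering style; the fragment below is not minikube's actual template:

package main

import (
	"log"
	"os"
	"text/template"
)

// Illustrative fragment only, covering the two per-cluster values.
const frag = `    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(frag))
	data := struct {
		VIP  string
		Port int
	}{VIP: "192.169.0.254", Port: 8443}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}
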
	I0819 10:28:09.234506    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.242939    4789 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:28:09.242994    4789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0819 10:28:09.251336    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm
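
The download URLs above carry a checksum= query telling the downloader to verify each binary against its published .sha256 file. A Go sketch of that verification with illustrative local paths (the cache path follows the log; kubelet.sha256 stands for the fetched checksum file):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

func main() {
	// Illustrative paths: the cached binary and the fetched .sha256 file.
	f, err := os.Open("/Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	want, err := os.ReadFile("kubelet.sha256")
	if err != nil {
		log.Fatal(err)
	}
	if got != strings.TrimSpace(string(want)) {
		log.Fatalf("checksum mismatch: got %s", got)
	}
	fmt.Println("checksum OK")
}
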
	I0819 10:28:11.797289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:28:11.809069    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.809192    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.812267    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:28:11.812291    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 10:28:12.469259    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.469340    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.472845    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:28:12.472869    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:28:13.348737    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.348820    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.352429    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:28:13.352449    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:28:13.542994    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:28:13.550937    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:28:13.564187    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:28:13.577654    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:28:13.591433    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:28:13.594347    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:13.604347    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:13.710422    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:13.730131    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:13.730407    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:13.730448    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:13.739474    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51199
	I0819 10:28:13.739816    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:13.740174    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:13.740190    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:13.740438    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:13.740564    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:13.740661    4789 start.go:317] joinCluster: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:28:13.740750    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 10:28:13.740767    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:13.740857    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:13.740939    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:13.741027    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:13.741101    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:13.815525    4789 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:13.815563    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0819 10:28:41.108330    4789 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (27.292143754s)
	I0819 10:28:41.108351    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 10:28:41.504714    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000-m02 minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=false
	I0819 10:28:41.585348    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-431000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 10:28:41.693283    4789 start.go:319] duration metric: took 27.951997328s to joinCluster
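
The join above is two commands: `kubeadm token create --print-join-command --ttl=0` on the primary, then the printed `kubeadm join` on the new node with the control-plane, advertise-address, bind-port, CRI-socket and node-name flags shown in the log. A local Go sketch of that sequence (the real run executes both over SSH):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatal(err)
	}
	fields := strings.Fields(strings.TrimSpace(string(out)))
	if len(fields) < 2 || fields[0] != "kubeadm" {
		log.Fatalf("unexpected join command: %q", out)
	}
	// Re-run the printed command with the extra flags from the log.
	args := append(fields[1:],
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/cri-dockerd.sock",
		"--node-name=ha-431000-m02",
		"--control-plane",
		"--apiserver-advertise-address=192.169.0.6",
		"--apiserver-bind-port=8443",
	)
	if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
		log.Fatalf("join failed: %v\n%s", err, out)
	}
}
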
	I0819 10:28:41.693326    4789 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:41.693537    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:41.715528    4789 out.go:177] * Verifying Kubernetes components...
	I0819 10:28:41.790354    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:41.995139    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:42.017369    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:28:42.017608    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:28:42.017650    4789 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
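
The client setup above loads the kubeconfig, then overrides the stale VIP endpoint (192.169.0.254) with the reachable control plane at 192.169.0.5, as the warning line notes. A client-go sketch of the same override, assuming the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19478-1622/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cfg.Host = "https://192.169.0.5:8443" // override the stale VIP endpoint, as the warning describes
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-431000-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(node.Name, node.Status.Conditions)
}
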
	I0819 10:28:42.017827    4789 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:42.017919    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.017925    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.017930    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.017935    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.025432    4789 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 10:28:42.518902    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.518917    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.518923    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.518927    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.521742    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:43.018396    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.018411    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.018417    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.018421    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.021454    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:43.518031    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.518083    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.518106    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.518116    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.522999    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:44.018193    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.018219    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.018231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.018237    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.021854    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:44.022387    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:44.518152    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.518189    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.518196    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.518199    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.520027    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.019772    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.019792    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.019799    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.019803    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.021628    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.518039    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.518053    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.518059    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.518064    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.520113    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:46.018198    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.018232    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.018239    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.018243    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.020136    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.518474    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.518490    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.518496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.518499    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.520505    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.520916    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:47.019124    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.019150    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.019162    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.022729    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:47.518316    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.518341    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.518351    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.518356    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.520471    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:48.019594    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.019620    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.019630    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.019636    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.023447    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:48.518492    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.518526    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.518583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.518593    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.523421    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:48.523787    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:49.019217    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.019242    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.019254    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.019260    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.022862    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:49.520299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.520324    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.520337    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.520342    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.523532    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.019383    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.019412    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.019424    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.019430    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.022847    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.519489    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.519503    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.519511    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.519515    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.522131    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:51.019130    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.019153    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.019163    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.022497    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:51.022894    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:51.518391    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.518448    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.518465    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.518476    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.521848    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.019014    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.019045    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.019103    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.019117    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.022339    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.519630    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.519644    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.519651    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.519655    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.522019    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.018435    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.018460    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.018472    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.018480    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.021850    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:53.518299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.518340    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.518349    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.518355    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.520795    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.521268    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:54.020380    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.020406    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.020418    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.020423    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.024178    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:54.519346    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.519364    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.519383    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.519387    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.521155    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:55.020400    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.020425    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.020437    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.020444    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.024326    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:55.519229    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.519245    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.519264    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.519268    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.521435    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:55.521852    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:56.019678    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.019703    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.019714    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.019719    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.023317    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:56.518539    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.518563    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.518576    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.518581    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.521781    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.020424    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.020449    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.020460    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.020465    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.024114    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.519399    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.519428    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.519468    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.519475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.522788    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.523223    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:58.018734    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.018759    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.018770    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.018777    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.022242    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.518348    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.518359    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.518371    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.518375    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.522907    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.523168    4789 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:28:58.523182    4789 node_ready.go:38] duration metric: took 16.504973252s for node "ha-431000-m02" to be "Ready" ...
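
The 16.5s wait above is a simple poll of GET /api/v1/nodes/ha-431000-m02, roughly every 500ms, until the NodeReady condition reports True, within the stated 6m budget. A client-go sketch of that wait as a helper (clientset construction as in the earlier kubeconfig sketch):

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its NodeReady condition is True,
// mirroring the half-second GET loop and 6m budget shown in the log.
func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
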
	I0819 10:28:58.523189    4789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:28:58.523237    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:28:58.523243    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.523249    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.523253    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.528083    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.532699    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.532761    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:28:58.532768    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.532774    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.532776    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.535978    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.536344    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.536351    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.536358    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.536361    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.538061    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.538368    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.538377    4789 pod_ready.go:82] duration metric: took 5.660556ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538383    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538413    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:28:58.538417    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.538423    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.538428    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.540013    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.540457    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.540465    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.540471    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.540475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.542120    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.542393    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.542400    4789 pod_ready.go:82] duration metric: took 4.011453ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542406    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542439    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:28:58.542444    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.542449    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.542454    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.543986    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.544340    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.544347    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.544353    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.544356    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.545868    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.546173    4789 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.546181    4789 pod_ready.go:82] duration metric: took 3.769725ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546187    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546221    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:28:58.546226    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.546231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.546234    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.547638    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.548110    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.548118    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.548123    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.548127    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.549514    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.549853    4789 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.549860    4789 pod_ready.go:82] duration metric: took 3.668598ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.549868    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.718822    4789 request.go:632] Waited for 168.888912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718861    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718867    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.718872    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.718876    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.721032    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:58.919673    4789 request.go:632] Waited for 198.011193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919731    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919740    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.919750    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.919807    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.923236    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.923670    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.923682    4789 pod_ready.go:82] duration metric: took 373.799986ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
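
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's built-in rate limiter, which defaults to 5 requests/second with a burst of 10; the waiter issues GETs faster than that, so requests queue on the client rather than being rejected by API Priority and Fairness on the server. A sketch of raising those limits on a rest.Config (the values are illustrative):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults are QPS=5, Burst=10; polls spaced ~200ms apart
    	// exceed that, producing the client-side waits seen in the log.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", cs)
    }
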
	I0819 10:28:58.923691    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.119399    4789 request.go:632] Waited for 195.629207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119559    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119572    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.119583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.119589    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.122804    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.318619    4789 request.go:632] Waited for 195.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318674    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318695    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.318702    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.318705    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.320812    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.321165    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.321173    4789 pod_ready.go:82] duration metric: took 397.466691ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.321180    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.520541    4789 request.go:632] Waited for 199.292765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520642    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520652    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.520663    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.520672    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.524463    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.718728    4789 request.go:632] Waited for 192.615056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718803    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718811    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.718818    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.718823    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.720955    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.721397    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.721407    4789 pod_ready.go:82] duration metric: took 400.213219ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.721415    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.918907    4789 request.go:632] Waited for 197.434904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919004    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919014    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.919024    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.919030    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.922451    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.119192    4789 request.go:632] Waited for 196.220574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119263    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119272    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.119286    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.119297    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.122630    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.122957    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.122968    4789 pod_ready.go:82] duration metric: took 401.538458ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.122977    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.320524    4789 request.go:632] Waited for 197.475989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320660    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320672    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.320681    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.320689    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.323985    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.519403    4789 request.go:632] Waited for 194.628597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519535    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519546    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.519560    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.519568    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.523121    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.523435    4789 pod_ready.go:93] pod "kube-proxy-5h7j2" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.523449    4789 pod_ready.go:82] duration metric: took 400.456993ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.523457    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.718666    4789 request.go:632] Waited for 195.15054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718742    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718752    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.718786    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.718800    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.721920    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.918782    4789 request.go:632] Waited for 196.40919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918873    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918882    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.918896    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.918906    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.922355    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.922815    4789 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.922824    4789 pod_ready.go:82] duration metric: took 399.351509ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.922830    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.118854    4789 request.go:632] Waited for 195.977175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118950    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118965    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.118981    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.118987    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.122683    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.318886    4789 request.go:632] Waited for 195.887859ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319029    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319042    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.319053    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.319063    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.322689    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.323187    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.323200    4789 pod_ready.go:82] duration metric: took 400.355182ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.323208    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.518928    4789 request.go:632] Waited for 195.662505ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519043    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519057    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.519070    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.519077    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.522736    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.718819    4789 request.go:632] Waited for 195.65197ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718885    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718891    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.718899    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.718905    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.721246    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:29:01.721682    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.721691    4789 pod_ready.go:82] duration metric: took 398.467113ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.721701    4789 pod_ready.go:39] duration metric: took 3.198431164s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:29:01.721718    4789 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:29:01.721774    4789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:29:01.735634    4789 api_server.go:72] duration metric: took 20.041851081s to wait for apiserver process to appear ...
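
Before probing the HTTP endpoint, api_server.go confirms a kube-apiserver process exists by running pgrep over SSH. Run locally, the same check looks roughly like this (sudo and the pattern follow the log; error handling is simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// pgrep exits 0 if a matching process exists; -x matches the pattern
    	// exactly, -n picks the newest match, -f matches the full command line.
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver process not found:", err)
    		return
    	}
    	fmt.Printf("kube-apiserver pid: %s", out)
    }
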
	I0819 10:29:01.735647    4789 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:29:01.735663    4789 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:29:01.738815    4789 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:29:01.738848    4789 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:29:01.738854    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.738860    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.738864    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.739526    4789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:29:01.739580    4789 api_server.go:141] control plane version: v1.31.0
	I0819 10:29:01.739589    4789 api_server.go:131] duration metric: took 3.937962ms to wait for apiserver health ...
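
After the process check, minikube hits the API server's /healthz endpoint and then /version. A bare-bones version of that probe (this sketch skips TLS verification for brevity; a real client should present the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Illustrative only: InsecureSkipVerify avoids wiring up the cluster CA.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.169.0.5:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers 200 with the literal body "ok",
    	// matching the "returned 200: ok" lines above.
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    }
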
	I0819 10:29:01.739594    4789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:29:01.918638    4789 request.go:632] Waited for 178.995687ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918733    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918745    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.918757    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.918762    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.922864    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:01.926606    4789 system_pods.go:59] 17 kube-system pods found
	I0819 10:29:01.926628    4789 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:01.926633    4789 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:01.926636    4789 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:01.926640    4789 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:01.926642    4789 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:01.926645    4789 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:01.926647    4789 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:01.926650    4789 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:01.926653    4789 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:01.926656    4789 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:01.926659    4789 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:01.926666    4789 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:01.926669    4789 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:01.926672    4789 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:01.926674    4789 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:01.926677    4789 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:01.926680    4789 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:01.926684    4789 system_pods.go:74] duration metric: took 187.080965ms to wait for pod list to return data ...
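
system_pods.go lists everything in kube-system and reports each pod's phase, as in the 17-pod dump above. An equivalent client-go sketch (clientset construction as in the earlier sketches):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		// The log prints name, UID, and phase for each pod.
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    		if p.Status.Phase != corev1.PodRunning {
    			fmt.Println("  -> not Running yet")
    		}
    	}
    }
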
	I0819 10:29:01.926689    4789 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:29:02.119406    4789 request.go:632] Waited for 192.625822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119507    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119517    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.119528    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.119535    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.123120    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.123283    4789 default_sa.go:45] found service account: "default"
	I0819 10:29:02.123293    4789 default_sa.go:55] duration metric: took 196.595366ms for default service account to be created ...
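
The default-service-account wait exists because the service-account controller in kube-controller-manager creates "default" asynchronously after a namespace appears. A minimal sketch of the same List-and-scan (clientset construction as above):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Mirrors GET /api/v1/namespaces/default/serviceaccounts in the log.
    	sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, sa := range sas.Items {
    		if sa.Name == "default" {
    			fmt.Println(`found service account: "default"`)
    		}
    	}
    }
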
	I0819 10:29:02.123300    4789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:29:02.319795    4789 request.go:632] Waited for 196.43255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319928    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319939    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.319947    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.319954    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.324586    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:02.328058    4789 system_pods.go:86] 17 kube-system pods found
	I0819 10:29:02.328071    4789 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:02.328075    4789 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:02.328078    4789 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:02.328083    4789 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:02.328086    4789 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:02.328088    4789 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:02.328091    4789 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:02.328093    4789 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:02.328096    4789 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:02.328098    4789 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:02.328101    4789 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:02.328103    4789 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:02.328106    4789 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:02.328109    4789 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:02.328112    4789 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:02.328115    4789 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:02.328117    4789 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:02.328122    4789 system_pods.go:126] duration metric: took 204.813151ms to wait for k8s-apps to be running ...
	I0819 10:29:02.328133    4789 system_svc.go:44] waiting for kubelet service to be running ...
	I0819 10:29:02.328183    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:29:02.340002    4789 system_svc.go:56] duration metric: took 11.865981ms WaitForService to wait for kubelet
	I0819 10:29:02.340017    4789 kubeadm.go:582] duration metric: took 20.646222268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
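
system_svc.go treats a zero exit status from systemctl is-active --quiet as "kubelet is running". Run locally (minikube executes the command inside the guest over SSH), the check is just:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; the exit code alone carries the answer
    	// (0 = active, non-zero = inactive/failed/unknown).
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
    	if err != nil {
    		fmt.Println("kubelet service is not running:", err)
    		return
    	}
    	fmt.Println("kubelet service is running")
    }
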
	I0819 10:29:02.340034    4789 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:29:02.518831    4789 request.go:632] Waited for 178.726274ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518969    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518980    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.518991    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.518998    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.522659    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.523326    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523339    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523348    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523351    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523354    4789 node_conditions.go:105] duration metric: took 183.311856ms to run NodePressure ...
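
node_conditions.go reads each node's capacity (the log shows 17734596Ki of ephemeral storage and 2 CPUs per node) and verifies no pressure conditions are set. A client-go sketch that prints the same figures:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
    		fmt.Printf("node cpu capacity is %s\n", cpu.String())
    		for _, c := range n.Status.Conditions {
    			// MemoryPressure/DiskPressure/PIDPressure should all be False
    			// on a healthy node.
    			if c.Status == corev1.ConditionTrue && c.Type != corev1.NodeReady {
    				fmt.Printf("  pressure condition %s is True\n", c.Type)
    			}
    		}
    	}
    }
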
	I0819 10:29:02.523361    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:29:02.523378    4789 start.go:255] writing updated cluster config ...
	I0819 10:29:02.544110    4789 out.go:201] 
	I0819 10:29:02.566227    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:02.566358    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.588965    4789 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:29:02.630777    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:29:02.630803    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:29:02.630953    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:29:02.630966    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:29:02.631053    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.631767    4789 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:29:02.631849    4789 start.go:364] duration metric: took 64.609µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:29:02.631869    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:29:02.631978    4789 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0819 10:29:02.652968    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:29:02.653116    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:29:02.653158    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:29:02.663539    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0819 10:29:02.663925    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:29:02.664263    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:29:02.664277    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:29:02.664539    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:29:02.664672    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:02.664758    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:02.664867    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:29:02.664899    4789 client.go:168] LocalClient.Create starting
	I0819 10:29:02.664932    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:29:02.664992    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665005    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665051    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:29:02.665087    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665103    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665116    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:29:02.665122    4789 main.go:141] libmachine: (ha-431000-m03) Calling .PreCreateCheck
	I0819 10:29:02.665218    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.665228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:02.674109    4789 main.go:141] libmachine: Creating machine...
	I0819 10:29:02.674126    4789 main.go:141] libmachine: (ha-431000-m03) Calling .Create
	I0819 10:29:02.674302    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.674550    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.674293    4918 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:29:02.674675    4789 main.go:141] libmachine: (ha-431000-m03) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:29:02.956098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.955977    4918 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa...
	I0819 10:29:03.041212    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.041121    4918 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk...
	I0819 10:29:03.041230    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing magic tar header
	I0819 10:29:03.041239    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing SSH key tar header
	I0819 10:29:03.042098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.042003    4918 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03 ...
	I0819 10:29:03.582755    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.582783    4789 main.go:141] libmachine: (ha-431000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:29:03.582846    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:29:03.618942    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:29:03.618960    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:29:03.619021    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619049    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619085    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:29:03.619116    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:29:03.619133    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:29:03.621990    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Pid is 4921
	I0819 10:29:03.622461    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:29:03.622497    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.622585    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:03.623424    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:03.623486    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:03.623500    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:03.623537    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:03.623548    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:03.623558    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:03.623568    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:03.629643    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:29:03.638725    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:29:03.639577    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:03.639599    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:03.639609    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:03.639622    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.022361    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:29:04.022375    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:29:04.137228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:04.137262    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:04.137274    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:04.137284    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.138001    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:29:04.138016    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:29:05.623879    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 1
	I0819 10:29:05.623896    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:05.624023    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:05.624809    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:05.624873    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:05.624888    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:05.624904    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:05.624917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:05.624926    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:05.624935    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:07.626679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 2
	I0819 10:29:07.626696    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:07.626779    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:07.627539    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:07.627582    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:07.627592    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:07.627610    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:07.627619    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:07.627626    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:07.627635    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.627812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 3
	I0819 10:29:09.627828    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:09.627917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:09.628679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:09.628746    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:09.628777    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:09.628791    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:09.628799    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:09.628806    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:09.628812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.722721    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:29:09.722792    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:29:09.722802    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:29:09.745848    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:29:11.630390    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 4
	I0819 10:29:11.630407    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:11.630495    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:11.631275    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:11.631321    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:11.631331    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:11.631340    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:11.631359    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:11.631366    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:11.631387    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:13.633236    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 5
	I0819 10:29:13.633251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.633339    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.634147    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:13.634209    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0819 10:29:13.634221    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:29:13.634228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:29:13.634232    4789 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
	I0819 10:29:13.634299    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:13.634943    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635064    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635157    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:29:13.635165    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:29:13.635251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.635310    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.636120    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:29:13.636129    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:29:13.636133    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:29:13.636138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:13.636228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:13.636309    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636392    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636477    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:13.636587    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:13.636755    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:13.636763    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:29:14.697546    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
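Note: the empty result above is the WaitForSSH probe succeeding: libmachine repeatedly opens an SSH session and runs `exit 0` until the guest answers. A hedged sketch of the reachability half of that loop, polling TCP port 22 with a deadline (the real code runs the command over an authenticated session):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the guest's SSH port until a TCP connection succeeds or
// the overall deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh not reachable on %s within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("192.169.0.7:22", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("SSH port open")
}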
	I0819 10:29:14.697558    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:29:14.697564    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.697702    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.697798    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.697887    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.698009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.698168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.698318    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.698326    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:29:14.765778    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:29:14.765827    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:29:14.765833    4789 main.go:141] libmachine: Provisioning with buildroot...
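Note: provisioner detection works off the /etc/os-release dump captured above: the ID field selects the provisioning backend, here buildroot. A small sketch of that parse, assuming the KEY=VALUE layout shown:

package main

import (
	"fmt"
	"strings"
)

// osID extracts the ID field from os-release content, stripping optional
// quotes, as in ID=buildroot above.
func osID(osRelease string) string {
	for _, line := range strings.Split(osRelease, "\n") {
		if v, ok := strings.CutPrefix(strings.TrimSpace(line), "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	out := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`
	fmt.Println(osID(out)) // buildroot
}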
	I0819 10:29:14.765839    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.765977    4789 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:29:14.765988    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.766081    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.766185    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.766270    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766369    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766481    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.766635    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.766783    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.766792    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:29:14.841753    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:29:14.841769    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.841901    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.842009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842101    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842195    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.842324    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.842477    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.842489    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:29:14.911764    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:29:14.911779    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:29:14.911793    4789 buildroot.go:174] setting up certificates
	I0819 10:29:14.911800    4789 provision.go:84] configureAuth start
	I0819 10:29:14.911807    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.911942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:14.912037    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.912110    4789 provision.go:143] copyHostCerts
	I0819 10:29:14.912141    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912187    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:29:14.912193    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912326    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:29:14.912504    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912534    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:29:14.912539    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912651    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:29:14.912808    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912854    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:29:14.912859    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912935    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:29:14.913083    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
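Note: the server certificate generated here carries the SAN list from the log line above (127.0.0.1, 192.169.0.7, ha-431000-m03, localhost, minikube) so the Docker TLS endpoint validates under any of those names. A self-signed sketch with the same SANs follows; minikube actually signs with the CA key named in the log, so treat this as illustrative only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the san=[...] list logged above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:    []string{"ha-431000-m03", "localhost", "minikube"},
	}
	// Self-signed for brevity: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}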
	I0819 10:29:15.064390    4789 provision.go:177] copyRemoteCerts
	I0819 10:29:15.064440    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:29:15.064455    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.064599    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.064695    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.064786    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.064886    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:15.103656    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:29:15.103727    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:29:15.123430    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:29:15.123497    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:29:15.143265    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:29:15.143333    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:29:15.162885    4789 provision.go:87] duration metric: took 251.064942ms to configureAuth
	I0819 10:29:15.162900    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:29:15.163052    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:15.163065    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:15.163221    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.163329    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.163417    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163506    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.163693    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.163824    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.163831    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:29:15.225270    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:29:15.225286    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:29:15.225356    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:29:15.225368    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.225510    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.225619    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225708    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225810    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.225948    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.226090    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.226134    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:29:15.299640    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:29:15.299658    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.299797    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.299889    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.299978    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.300067    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.300202    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.300355    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.300368    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:29:16.819930    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
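Note: the diff output above shows the install-only-if-changed idiom: the staged docker.service.new is compared with the live unit and moved into place (followed by daemon-reload, enable, restart) only when they differ; here the live unit did not exist yet, so the move and the enable symlink both ran. A sketch of the compare-and-swap step, leaving the systemctl calls to the caller:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged replaces live with staged only when their contents differ
// or live is missing, mirroring the diff/mv command logged above.
func installIfChanged(staged, live string) (bool, error) {
	newData, err := os.ReadFile(staged)
	if err != nil {
		return false, err
	}
	oldData, err := os.ReadFile(live)
	if err == nil && bytes.Equal(oldData, newData) {
		return false, os.Remove(staged) // identical: discard the staged copy
	}
	// live missing or different: swap it in (rename is atomic on one filesystem)
	return true, os.Rename(staged, live)
}

func main() {
	changed, err := installIfChanged(
		"/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("changed:", changed) // caller daemon-reloads and restarts when true
}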
	
	I0819 10:29:16.819945    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:29:16.819953    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetURL
	I0819 10:29:16.820095    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:29:16.820107    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:29:16.820113    4789 client.go:171] duration metric: took 14.154897138s to LocalClient.Create
	I0819 10:29:16.820124    4789 start.go:167] duration metric: took 14.154947877s to libmachine.API.Create "ha-431000"
	I0819 10:29:16.820129    4789 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:29:16.820136    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:29:16.820145    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.820288    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:29:16.820301    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.820396    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.820494    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.820582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.820664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:16.862693    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:29:16.866416    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:29:16.866431    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:29:16.866540    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:29:16.866725    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:29:16.866732    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:29:16.866944    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:29:16.874578    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:29:16.904910    4789 start.go:296] duration metric: took 84.771069ms for postStartSetup
	I0819 10:29:16.904942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:16.905569    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.905740    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:16.906122    4789 start.go:128] duration metric: took 14.273822612s to createHost
	I0819 10:29:16.906138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.906230    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.906303    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906387    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906475    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.906573    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:16.906690    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:16.906697    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:29:16.969389    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088556.958185685
	
	I0819 10:29:16.969401    4789 fix.go:216] guest clock: 1724088556.958185685
	I0819 10:29:16.969406    4789 fix.go:229] Guest: 2024-08-19 10:29:16.958185685 -0700 PDT Remote: 2024-08-19 10:29:16.906131 -0700 PDT m=+127.499217490 (delta=52.054685ms)
	I0819 10:29:16.969416    4789 fix.go:200] guest clock delta is within tolerance: 52.054685ms
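Note: the tolerance check above compares the guest's `date +%s.%N` reading with the host clock at the moment of the call; a 52ms delta is accepted without touching the guest clock. A sketch of that comparison; the tolerance value is an assumption, since the log only says "within tolerance", and the parse assumes a full 9-digit nanosecond field as printed above:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec, err := strconv.ParseInt(frac, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724088556.958185685") // from the SSH output above
	if err != nil {
		panic(err)
	}
	host := time.Now() // the driver uses its own clock at the moment of the call
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v ok=%v\n", delta, delta <= 2*time.Second) // bound assumed
}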
	I0819 10:29:16.969419    4789 start.go:83] releasing machines lock for "ha-431000-m03", held for 14.337247496s
	I0819 10:29:16.969437    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.969573    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.992258    4789 out.go:177] * Found network options:
	I0819 10:29:17.014265    4789 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:29:17.037508    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.037542    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.037561    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038432    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038682    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038835    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:29:17.038873    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:29:17.038922    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.038957    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.039067    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:29:17.039087    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:17.039116    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039298    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039332    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039497    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039590    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039679    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039721    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:17.039809    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:29:17.074320    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:29:17.074385    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:29:17.120302    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
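Note: disabling the bridge CNI configs, as logged above, is just a rename: any bridge or podman config in /etc/cni/net.d gets a .mk_disabled suffix so the runtime stops loading it (here 87-podman-bridge.conflist). A sketch of the same rename pass:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same selection the find command above expresses with -name globs.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
}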
	I0819 10:29:17.120318    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.120398    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.135851    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:29:17.144402    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:29:17.152735    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.152784    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:29:17.161185    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.169599    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:29:17.177908    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.186319    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:29:17.194967    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:29:17.203702    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:29:17.212228    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:29:17.220632    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:29:17.228164    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:29:17.235717    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.329551    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:29:17.348829    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.348909    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:29:17.363903    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.374976    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:29:17.393061    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.404238    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.414728    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:29:17.438632    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.449143    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.464536    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:29:17.467445    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:29:17.474809    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:29:17.488421    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:29:17.581504    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:29:17.684960    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.684980    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:29:17.699658    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.803979    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:30:18.773891    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.968555005s)
	I0819 10:30:18.774012    4789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:30:18.808676    4789 out.go:201] 
	W0819 10:30:18.829152    4789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:29:15 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570013158Z" level=info msg="Starting up"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570447745Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.572542412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.584880924Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603137975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603181724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603219390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603233227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603303033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603338653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603471354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603509282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603521199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603528665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603591360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603811486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605351283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605389063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605504861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605538594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605610859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605677674Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607907354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607976584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607991948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608010711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608023403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608093276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608724366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608874333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608913351Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608929178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608943960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608968346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609006571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609021660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609032833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609044499Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609055485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609066063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609088279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609103865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609115537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609130257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609139734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609151164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609161605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609173829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609200246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609211000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609224200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609237871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609251525Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609296616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609316285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609327369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609362155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609478815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609512436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609530768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609541857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609553085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609563545Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610591556Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610680787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610769049Z" level=info msg="containerd successfully booted in 0.026402s"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.601341697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.606766805Z" level=info msg="Loading containers: start."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.688780306Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.769433920Z" level=info msg="Loading containers: done."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776749571Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776865122Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.804822251Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.805010917Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:29:16 ha-431000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.814047535Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:29:17 ha-431000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815466623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815881336Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815956644Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.816022765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:18 ha-431000-m03 dockerd[921]: time="2024-08-19T17:29:18.853267859Z" level=info msg="Starting up"
	Aug 19 17:30:18 ha-431000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
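Note: the root cause is visible at 17:30:18 in the journal above: on its second start, dockerd never manages to dial /run/containerd/containerd.sock and gives up with a context deadline, so systemd marks docker.service failed and minikube aborts with RUNTIME_ENABLE. A sketch of the probe that fails here, useful when reproducing the diagnosis inside the guest:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the containerd socket with a deadline; "context deadline exceeded"
	// in the journal corresponds to this dial never completing.
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "containerd socket not reachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}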
	W0819 10:30:18.829235    4789 out.go:270] * 
	W0819 10:30:18.830413    4789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:30:18.888275    4789 out.go:201] 
	
	
	==> Docker <==
	Aug 19 17:28:07 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c3745c7f8fb9ffda1a9528dbab0743afd132acd46a2634643d4b5a24035dc2e4/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/868ee98671e833d733f787480bd37f293c8c6eb8b4092a75c7b96c7993f5f451/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:28:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/74fd2f09b011aa0f318ae4259efd3f3d52dc61d0bd78f032481d1a46763eeaae/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.132794637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133043856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133186443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.133435141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139175494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139344496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139355701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.139421519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157876304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157962624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.157975535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:28:08 ha-431000 dockerd[1275]: time="2024-08-19T17:28:08.158198941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621287999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621447365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621465217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621560978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6d38fc70c811c9647892071fd07ef2e6455806b20e204cd6583df80c81ba64b7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 19 17:30:23 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040175789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040258993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040272849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040810082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da6e4a61b6cf8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   6d38fc70c811c       busybox-7dff88458-x7m6m
	b9d1bccf00c94       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	e7cacf032435f       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   868ee98671e83       storage-provisioner
	a3891ab602da5       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              15 minutes ago      Running             kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                         15 minutes ago      Running             kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	ed733554ed160       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   90ec229d87c2c       kube-vip-ha-431000
	11d9cd3b2f49f       1766f54c897f0                                                                                         15 minutes ago      Running             kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	262471364c991       604f5db92eaa8                                                                                         15 minutes ago      Running             kube-apiserver            0                   5a0fe916eaf1d       kube-apiserver-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                         15 minutes ago      Running             etcd                      0                   fc30d54d1b565       etcd-ha-431000
	2801f8f44773b       045733566833c                                                                                         15 minutes ago      Running             kube-controller-manager   0                   80d21805f230b       kube-controller-manager-ha-431000
	
	
	==> coredns [a3891ab602da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40841 - 35632 "HINFO IN 8043641794425982319.4992720317295253252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008506209s
	[INFO] 10.244.1.2:51889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132717s
	[INFO] 10.244.1.2:37985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001601417s
	[INFO] 10.244.1.2:55682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.007910651s
	[INFO] 10.244.0.4:38616 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.000569215s
	[INFO] 10.244.0.4:47772 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000054313s
	[INFO] 10.244.1.2:49768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135774s
	[INFO] 10.244.1.2:55729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00095124s
	[INFO] 10.244.1.2:38602 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089444s
	[INFO] 10.244.1.2:52875 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099022s
	[INFO] 10.244.1.2:49308 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063848s
	[INFO] 10.244.0.4:57863 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000064923s
	[INFO] 10.244.0.4:40409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096347s
	[INFO] 10.244.1.2:34617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084305s
	[INFO] 10.244.1.2:55843 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058734s
	[INFO] 10.244.0.4:43213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096675s
	[INFO] 10.244.0.4:44050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031036s
	[INFO] 10.244.1.2:49077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105574s
	[INFO] 10.244.1.2:57560 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084227s
	[INFO] 10.244.1.2:40959 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135434s
	
	
	==> coredns [b9d1bccf00c9] <==
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54195 - 29045 "HINFO IN 6513715404119561949.1799819676960271336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007921235s
	[INFO] 10.244.1.2:45210 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.055498798s
	[INFO] 10.244.0.4:53730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111076s
	[INFO] 10.244.0.4:51704 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000411643s
	[INFO] 10.244.1.2:54559 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088744s
	[INFO] 10.244.1.2:58642 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064137s
	[INFO] 10.244.1.2:34281 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000845538s
	[INFO] 10.244.0.4:53439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000058375s
	[INFO] 10.244.0.4:33951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106207s
	[INFO] 10.244.0.4:38202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000034691s
	[INFO] 10.244.0.4:46478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119286s
	[INFO] 10.244.0.4:53704 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053613s
	[INFO] 10.244.0.4:42766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051163s
	[INFO] 10.244.1.2:44413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116167s
	[INFO] 10.244.1.2:58453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067066s
	[INFO] 10.244.0.4:37472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063597s
	[INFO] 10.244.0.4:59559 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033396s
	[INFO] 10.244.1.2:59906 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000120736s
	[INFO] 10.244.0.4:47175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120659s
	[INFO] 10.244.0.4:56722 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121072s
	[INFO] 10.244.0.4:43652 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174608s
	[INFO] 10.244.0.4:32818 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00017028s
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:27:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:41:01 +0000   Mon, 19 Aug 2024 17:28:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7b5b85e2c64405f969f3e24eb671b2e
	  System UUID:                7f844fbb-0000-0000-b5d6-699bdfe1640c
	  Boot ID:                    cb211998-dc9c-4fd5-a169-3f6eeb2403fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x7m6m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-hr2qx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-vc76p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-lvdbg                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-5l56s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeReady                14m                kubelet          Node ha-431000 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	
	
	Name:               ha-431000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:40:53 +0000   Mon, 19 Aug 2024 17:28:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-431000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 21fb6f298fbf435c88fd6e9f9b50e04f
	  System UUID:                decf4e23-0000-0000-95db-084dbcc69753
	  Boot ID:                    330a7904-5229-4d07-9792-de118102386c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2l9lq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-431000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-qmgqd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-431000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-431000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5h7j2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-431000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-431000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	
	
	Name:               ha-431000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_42_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:43:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:43:00 +0000   Mon, 19 Aug 2024 17:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:43:00 +0000   Mon, 19 Aug 2024 17:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:43:00 +0000   Mon, 19 Aug 2024 17:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:43:00 +0000   Mon, 19 Aug 2024 17:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-431000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e639484a1c98402fa6d9e2bb5fe71e03
	  System UUID:                c32a4140-0000-0000-838a-ef53ae6c724a
	  Boot ID:                    65e77bd5-3b1f-49d0-a224-e0cd2d7b346a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wfcpq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-kcrzx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      33s
	  kube-system                 kube-proxy-2fn5w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  33s (x2 over 33s)  kubelet          Node ha-431000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x2 over 33s)  kubelet          Node ha-431000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x2 over 33s)  kubelet          Node ha-431000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  33s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeReady                10s                kubelet          Node ha-431000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.712596] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230971] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.519395] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.106046] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.754357] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.260100] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.108326] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.116397] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.050322] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.370658] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.100232] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.114416] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.133019] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.706453] systemd-fstab-generator[1261]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.542020] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +4.524199] systemd-fstab-generator[1651]: Ignoring "noauto" option for root device
	[  +0.058523] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.145787] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
	[  +0.090131] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.001426] kauditd_printk_skb: 35 callbacks suppressed
	[Aug19 17:28] kauditd_printk_skb: 15 callbacks suppressed
	[ +36.695422] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [39fe08877284] <==
	{"level":"info","ts":"2024-08-19T17:28:39.577230Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577486Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577607Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577632Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858","remote-peer-urls":["https://192.169.0.6:2380"]}
	{"level":"info","ts":"2024-08-19T17:28:39.577678Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577764Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.577976Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:39.578023Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582369Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T17:28:40.582407Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.582418Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.596476Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.597370Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T17:28:40.597585Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:40.605913Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:28:41.107824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860 13991592590719088728)"}
	{"level":"info","ts":"2024-08-19T17:28:41.107895Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-19T17:28:41.107911Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:32:31.484329Z","caller":"traceutil/trace.go:171","msg":"trace[1768622606] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"105.97642ms","start":"2024-08-19T17:32:31.378330Z","end":"2024-08-19T17:32:31.484306Z","steps":["trace[1768622606] 'process raft request'  (duration: 69.010204ms)","trace[1768622606] 'compare'  (duration: 36.887791ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:37:40.726136Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1233}
	{"level":"info","ts":"2024-08-19T17:37:40.747676Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1233,"took":"20.998439ms","hash":1199177849,"current-db-size-bytes":3051520,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-19T17:37:40.747929Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1199177849,"revision":1233,"compact-revision":-1}
	{"level":"info","ts":"2024-08-19T17:42:40.732325Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1862}
	{"level":"info","ts":"2024-08-19T17:42:40.746963Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1862,"took":"13.211731ms","hash":857120674,"current-db-size-bytes":3051520,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1675264,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-19T17:42:40.747021Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":857120674,"revision":1862,"compact-revision":1233}
	
	
	==> kernel <==
	 17:43:02 up 15 min,  0 users,  load average: 0.04, 0.11, 0.09
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:42:13.913166       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:13.913176       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:23.920285       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:42:23.920446       1 main.go:299] handling current node
	I0819 17:42:23.920502       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:23.920520       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:33.912776       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:33.912941       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:33.913148       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:42:33.913243       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:42:33.913373       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.8 Flags: [] Table: 0} 
	I0819 17:42:33.913565       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:42:33.913609       1 main.go:299] handling current node
	I0819 17:42:43.915583       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:42:43.915670       1 main.go:299] handling current node
	I0819 17:42:43.915684       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:43.915691       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:43.915840       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:42:43.915938       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:42:53.914225       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:42:53.914609       1 main.go:299] handling current node
	I0819 17:42:53.914774       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:42:53.914814       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:53.914944       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:42:53.915297       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [262471364c99] <==
	I0819 17:27:42.843862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 17:27:42.851035       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 17:27:42.851176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:27:43.131229       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 17:27:43.156609       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 17:27:43.228677       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 17:27:43.232630       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0819 17:27:43.233263       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:27:43.235521       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 17:27:43.816793       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 17:27:45.642805       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 17:27:45.648554       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 17:27:45.656204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 17:27:49.372173       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 17:27:49.521616       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 17:41:58.471372       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51273: use of closed network connection
	E0819 17:41:58.792809       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51278: use of closed network connection
	E0819 17:41:58.976708       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51280: use of closed network connection
	E0819 17:41:59.288867       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51285: use of closed network connection
	E0819 17:41:59.474614       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51287: use of closed network connection
	E0819 17:41:59.785950       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51292: use of closed network connection
	E0819 17:42:02.821757       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51320: use of closed network connection
	E0819 17:42:03.005704       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51322: use of closed network connection
	E0819 17:42:03.316458       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51327: use of closed network connection
	E0819 17:42:03.527436       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:51329: use of closed network connection
	
	
	==> kube-controller-manager [2801f8f44773] <==
	I0819 17:41:01.735584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	E0819 17:42:29.279378       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-nzg89 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-nzg89\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0819 17:42:29.577488       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-431000-m04\" does not exist"
	I0819 17:42:29.587389       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-431000-m04" podCIDRs=["10.244.2.0/24"]
	I0819 17:42:29.587695       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:29.587776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:29.597406       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:29.968943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:30.304809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:30.365012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.195µs"
	I0819 17:42:32.043252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:33.778806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:33.779606       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-431000-m04"
	I0819 17:42:33.857848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:39.645314       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:52.547283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:52.548660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-431000-m04"
	I0819 17:42:52.555756       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:52.559687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.641µs"
	I0819 17:42:52.568999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.897µs"
	I0819 17:42:52.574921       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.923µs"
	I0819 17:42:53.790919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:54.429233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.87659ms"
	I0819 17:42:54.429711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.036µs"
	I0819 17:43:00.100260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	
	
	==> kube-proxy [889ab608901b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:27:50.162614       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:27:50.171417       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0819 17:27:50.171450       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:27:50.239161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:27:50.239202       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:27:50.239220       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:27:50.242102       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:27:50.242306       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:27:50.242335       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:27:50.253458       1 config.go:197] "Starting service config controller"
	I0819 17:27:50.253497       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:27:50.253518       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:27:50.253542       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:27:50.253889       1 config.go:326] "Starting node config controller"
	I0819 17:27:50.253915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:27:50.354735       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:27:50.354788       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:27:50.354817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	W0819 17:27:42.867998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:27:42.868077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.900445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:27:42.900541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.970545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:27:42.970765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:43.004003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:43.004103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:27:43.339820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:30:22.272037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:30:22.273195       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e37fe27d-f1bf-427d-a76d-96722b0c74a1(default/busybox-7dff88458-x7m6m) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x7m6m"
	E0819 17:30:22.273433       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" pod="default/busybox-7dff88458-x7m6m"
	I0819 17:30:22.273582       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:42:29.626807       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kcrzx\": pod kindnet-kcrzx is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kcrzx" node="ha-431000-m04"
	E0819 17:42:29.626857       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4d8e74ea-456c-476b-951f-c880eb642788(kube-system/kindnet-kcrzx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kcrzx"
	E0819 17:42:29.626868       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kcrzx\": pod kindnet-kcrzx is already assigned to node \"ha-431000-m04\"" pod="kube-system/kindnet-kcrzx"
	I0819 17:42:29.626879       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kcrzx" node="ha-431000-m04"
	E0819 17:42:29.628487       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2fn5w\": pod kube-proxy-2fn5w is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2fn5w" node="ha-431000-m04"
	E0819 17:42:29.628792       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bca1b722-fe85-4f4b-a536-8228357812a4(kube-system/kube-proxy-2fn5w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2fn5w"
	E0819 17:42:29.628962       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2fn5w\": pod kube-proxy-2fn5w is already assigned to node \"ha-431000-m04\"" pod="kube-system/kube-proxy-2fn5w"
	I0819 17:42:29.629175       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2fn5w" node="ha-431000-m04"
	E0819 17:42:52.562727       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wfcpq\": pod busybox-7dff88458-wfcpq is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wfcpq" node="ha-431000-m04"
	E0819 17:42:52.562826       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c7d1dd4a-aba7-4c8f-be2e-0dc5cdb4faf7(default/busybox-7dff88458-wfcpq) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wfcpq"
	E0819 17:42:52.562855       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wfcpq\": pod busybox-7dff88458-wfcpq is already assigned to node \"ha-431000-m04\"" pod="default/busybox-7dff88458-wfcpq"
	I0819 17:42:52.562878       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wfcpq" node="ha-431000-m04"
	
	
	==> kubelet <==
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:38:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:38:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:39:45 ha-431000 kubelet[2153]: E0819 17:39:45.526214    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:39:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:39:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:40:45 ha-431000 kubelet[2153]: E0819 17:40:45.529172    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:40:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:40:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:41:45 ha-431000 kubelet[2153]: E0819 17:41:45.526920    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:41:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:41:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:41:59 ha-431000 kubelet[2153]: E0819 17:41:59.290192    2153 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49834->127.0.0.1:35619: write tcp 127.0.0.1:49834->127.0.0.1:35619: write: broken pipe
	Aug 19 17:42:45 ha-431000 kubelet[2153]: E0819 17:42:45.526621    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:42:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:42:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:42:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:42:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (3.26s)

TestMultiControlPlane/serial/StopSecondaryNode (73.35s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 node stop m02 -v=7 --alsologtostderr: (8.355944233s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 7 (17.170180251s)

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:43:12.144390    6301 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:43:12.144690    6301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:43:12.144695    6301 out.go:358] Setting ErrFile to fd 2...
	I0819 10:43:12.144699    6301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:43:12.144870    6301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:43:12.145067    6301 out.go:352] Setting JSON to false
	I0819 10:43:12.145088    6301 mustload.go:65] Loading cluster: ha-431000
	I0819 10:43:12.145126    6301 notify.go:220] Checking for updates...
	I0819 10:43:12.145404    6301 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:43:12.145425    6301 status.go:255] checking status of ha-431000 ...
	I0819 10:43:12.145807    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:12.145846    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:12.154800    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51534
	I0819 10:43:12.155184    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:12.155622    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:12.155630    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:12.155892    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:12.156004    6301 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:43:12.156088    6301 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:43:12.156174    6301 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:43:12.157140    6301 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:43:12.157159    6301 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:43:12.157409    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:12.157436    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:12.165937    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51536
	I0819 10:43:12.166271    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:12.166627    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:12.166647    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:12.166903    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:12.167054    6301 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:43:12.167163    6301 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:43:12.167441    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:12.167471    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:12.179728    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51538
	I0819 10:43:12.180089    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:12.180412    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:12.180421    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:12.180629    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:12.180739    6301 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:43:12.180879    6301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:43:12.180895    6301 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:43:12.180977    6301 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:43:12.181055    6301 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:43:12.181130    6301 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:43:12.181200    6301 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:43:12.214618    6301 ssh_runner.go:195] Run: systemctl --version
	I0819 10:43:12.219373    6301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:43:12.236690    6301 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:43:12.236714    6301 api_server.go:166] Checking apiserver status ...
	I0819 10:43:12.236756    6301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:43:12.255722    6301 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup
	W0819 10:43:12.266384    6301 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2035/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:43:12.266443    6301 ssh_runner.go:195] Run: ls
	I0819 10:43:12.276487    6301 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:43:12.283005    6301 api_server.go:279] https://192.169.0.254:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 10:43:12.283035    6301 retry.go:31] will retry after 223.68449ms: https://192.169.0.254:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 10:43:12.507301    6301 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:43:12.513138    6301 api_server.go:279] https://192.169.0.254:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 10:43:12.513160    6301 retry.go:31] will retry after 369.066996ms: https://192.169.0.254:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 10:43:12.883146    6301 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:43:12.892238    6301 api_server.go:279] https://192.169.0.254:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 10:43:12.892261    6301 retry.go:31] will retry after 338.174358ms: https://192.169.0.254:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 10:43:13.230604    6301 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:43:18.231026    6301 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 10:43:18.231052    6301 retry.go:31] will retry after 377.696347ms: state is "Stopped"
	I0819 10:43:18.608865    6301 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:43:21.659565    6301 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: network is unreachable
	I0819 10:43:21.659604    6301 retry.go:31] will retry after 520.319329ms: state is "Stopped"
	I0819 10:43:22.180127    6301 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:43:25.244170    6301 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: network is unreachable
	I0819 10:43:25.244199    6301 retry.go:31] will retry after 734.551978ms: state is "Stopped"
	I0819 10:43:25.979401    6301 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:43:29.082751    6301 api_server.go:269] stopped: https://192.169.0.254:8443/healthz: Get "https://192.169.0.254:8443/healthz": dial tcp 192.169.0.254:8443: connect: network is unreachable
	I0819 10:43:29.082781    6301 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:43:29.082796    6301 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:43:29.082809    6301 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:43:29.083091    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:29.083114    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:29.091884    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51548
	I0819 10:43:29.092254    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:29.092601    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:29.092612    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:29.092847    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:29.092987    6301 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:43:29.093068    6301 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:43:29.093176    6301 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:43:29.094097    6301 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 4850 missing from process table
	I0819 10:43:29.094132    6301 status.go:330] ha-431000-m02 host status = "Stopped" (err=<nil>)
	I0819 10:43:29.094140    6301 status.go:343] host is not running, skipping remaining checks
	I0819 10:43:29.094148    6301 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:43:29.094159    6301 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:43:29.094418    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:29.094442    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:29.103402    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51550
	I0819 10:43:29.103745    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:29.104063    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:29.104074    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:29.104307    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:29.104430    6301 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:43:29.104516    6301 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:43:29.104602    6301 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:43:29.105577    6301 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:43:29.105586    6301 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:43:29.105839    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:29.105872    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:29.114737    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51552
	I0819 10:43:29.115113    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:29.115467    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:29.115479    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:29.115701    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:29.115822    6301 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:43:29.115916    6301 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:43:29.116177    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:29.116199    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:29.125176    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51554
	I0819 10:43:29.125546    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:29.125919    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:29.125934    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:29.126146    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:29.126292    6301 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:43:29.126434    6301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:43:29.126446    6301 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:43:29.126532    6301 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:43:29.126611    6301 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:43:29.126697    6301 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:43:29.126775    6301 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:43:29.161886    6301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:43:29.174037    6301 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:43:29.174051    6301 api_server.go:166] Checking apiserver status ...
	I0819 10:43:29.174093    6301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:43:29.185015    6301 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:43:29.185032    6301 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:43:29.185045    6301 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:43:29.185065    6301 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:43:29.185365    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:29.185394    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:29.194417    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51557
	I0819 10:43:29.194775    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:29.195124    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:29.195164    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:29.195382    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:29.195493    6301 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:43:29.195579    6301 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:43:29.195660    6301 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:43:29.196679    6301 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:43:29.196689    6301 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:43:29.196950    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:29.196977    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:29.205825    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51559
	I0819 10:43:29.206173    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:29.206530    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:29.206549    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:29.206765    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:29.206869    6301 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:43:29.206949    6301 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:43:29.207211    6301 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:43:29.207235    6301 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:43:29.216197    6301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51561
	I0819 10:43:29.216764    6301 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:43:29.217105    6301 main.go:141] libmachine: Using API Version  1
	I0819 10:43:29.217116    6301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:43:29.217335    6301 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:43:29.217459    6301 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:43:29.217617    6301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:43:29.217629    6301 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:43:29.217713    6301 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:43:29.217793    6301 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:43:29.217886    6301 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:43:29.218001    6301 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:43:29.246501    6301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:43:29.258315    6301 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
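Note: the stderr above shows the status command's health probe against the HA virtual IP: it polls /healthz, retries with growing delays while the endpoint returns 500 with "[-]etcd failed: reason withheld", and once the VIP stops answering it reports the apiserver as Stopped. A rough sketch of this poll-with-backoff pattern (assumed, not minikube's actual code; the URL, timeout, and backoff constants are illustrative):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The probe talks to the HA virtual IP seen in the log; the apiserver's
		// serving cert is not trusted by this host, so verification is skipped.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.169.0.254:8443/healthz"
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(30 * time.Second)

		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// A 500 here carries the per-check breakdown seen above,
				// e.g. "[-]etcd failed: reason withheld".
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			} else {
				fmt.Printf("healthz unreachable: %v\n", err)
			}
			time.Sleep(delay)
			delay *= 2 // the real loop uses jittered, capped retries
		}
		fmt.Println("deadline exceeded: apiserver reported as Stopped")
	}
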
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr": ha-431000
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

ha-431000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m03
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured

ha-431000-m04
type: Worker
host: Running
kubelet: Running

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr": ha-431000
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

ha-431000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m03
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured

ha-431000-m04
type: Worker
host: Running
kubelet: Running

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000: exit status 2 (16.907168956s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (15.106919606s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT |                     |
	|         | busybox-7dff88458-wfcpq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-431000 -v=7                | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node stop m02 -v=7         | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:43 PDT | 19 Aug 24 10:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:27:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:27:09.441458    4789 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:27:09.441716    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441721    4789 out.go:358] Setting ErrFile to fd 2...
	I0819 10:27:09.441725    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441914    4789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:27:09.443405    4789 out.go:352] Setting JSON to false
	I0819 10:27:09.468451    4789 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3399,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:27:09.468547    4789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:27:09.554597    4789 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:27:09.577770    4789 notify.go:220] Checking for updates...
	I0819 10:27:09.609734    4789 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:27:09.676944    4789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:09.699980    4789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:27:09.722951    4789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:27:09.744804    4789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.765726    4789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:27:09.787204    4789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:27:09.817679    4789 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 10:27:09.859821    4789 start.go:297] selected driver: hyperkit
	I0819 10:27:09.859849    4789 start.go:901] validating driver "hyperkit" against <nil>
	I0819 10:27:09.859893    4789 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:27:09.864287    4789 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.864395    4789 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:27:09.872759    4789 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:27:09.876743    4789 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.876768    4789 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:27:09.876803    4789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:27:09.877011    4789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:27:09.877072    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:09.877082    4789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 10:27:09.877094    4789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:27:09.877164    4789 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:09.877251    4789 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.919755    4789 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:27:09.940604    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:09.940675    4789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:27:09.940720    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:09.940918    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:09.940931    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:09.941271    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:09.941299    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json: {Name:mkf9dcbb24d8b9fbe62d81f81a7a87fec457d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:09.941835    4789 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:09.941963    4789 start.go:364] duration metric: took 95.166µs to acquireMachinesLock for "ha-431000"
	I0819 10:27:09.941997    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:09.942082    4789 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 10:27:09.963791    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:09.964075    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.964148    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:09.974068    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51111
	I0819 10:27:09.974512    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:09.974919    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:09.974932    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:09.975172    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:09.975283    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:09.975374    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:09.975471    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:09.975492    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:09.975527    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:09.975578    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975594    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975657    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:09.975695    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975707    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975719    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:09.975729    4789 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0819 10:27:09.975800    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.975970    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:09.976388    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:09.976397    4789 main.go:141] libmachine: (ha-431000) Calling .Create
	I0819 10:27:09.976462    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.976580    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:09.976459    4799 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.976633    4789 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:10.160305    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.160220    4799 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa...
	I0819 10:27:10.258779    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.258678    4799 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0819 10:27:10.258792    4789 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0819 10:27:10.258800    4789 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0819 10:27:10.259681    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.259588    4799 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000 ...
	I0819 10:27:10.634434    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.634476    4789 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:27:10.634529    4789 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:27:10.744945    4789 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:27:10.744966    4789 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:10.744993    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745030    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745065    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:10.745094    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:10.745118    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:10.748020    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Pid is 4802
	I0819 10:27:10.748404    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:27:10.748413    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.748494    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:10.749357    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:10.749398    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:10.749412    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:10.749423    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:10.749431    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:10.755634    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:10.806699    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:10.807300    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:10.807314    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:10.807322    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:10.807335    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.184562    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:11.184575    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:11.299194    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:11.299213    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:11.299228    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:11.299236    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.300075    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:11.300086    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:12.750038    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0819 10:27:12.750054    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:12.750189    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:12.750969    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:12.751019    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:12.751030    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:12.751039    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:12.751052    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:14.752158    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0819 10:27:14.752174    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:14.752264    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:14.753040    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:14.753090    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:14.753102    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:14.753111    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:14.753117    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.754325    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0819 10:27:16.754340    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:16.754402    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:16.755326    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:16.755347    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:16.755354    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:16.755373    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:16.755390    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.856153    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:27:16.856252    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:27:16.856262    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:27:16.880804    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:27:18.757489    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0819 10:27:18.757504    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:18.757601    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:18.758394    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:18.758435    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:18.758449    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:18.758481    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:18.758495    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:20.758927    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0819 10:27:20.758946    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.759035    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.759848    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:20.759873    4789 main.go:141] libmachine: (ha-431000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:20.759888    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:20.759901    4789 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:27:20.759913    4789 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
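The retry loop above is how the hyperkit driver maps the VM's freshly generated MAC address to an IP: it polls macOS's bootpd lease database at /var/db/dhcpd_leases until an entry for b2:ad:7c:2f:19:d9 appears. Below is a minimal Go sketch of that lookup, assuming the usual brace-delimited key=value lease format; findIPForMAC and its field handling are illustrative, not minikube's actual driver code.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPForMAC scans the bootpd lease file for a block whose hw_address
    // ends with the given MAC and returns that block's ip_address.
    // Illustrative sketch; minikube's real parser lives in the hyperkit driver.
    func findIPForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        fields := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // a new lease block starts: clear accumulated fields
                fields = map[string]string{}
            case line == "}": // block finished: check for a MAC match
                if strings.HasSuffix(fields["hw_address"], mac) {
                    return fields["ip_address"], nil
                }
            default:
                if k, v, ok := strings.Cut(line, "="); ok {
                    fields[k] = v
                }
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("no DHCP lease found for %s", mac)
    }

    func main() {
        ip, err := findIPForMAC("/var/db/dhcpd_leases", "b2:ad:7c:2f:19:d9")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("IP:", ip)
    }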
	I0819 10:27:20.759952    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:20.760523    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760634    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760741    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:27:20.760753    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:20.760839    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.760885    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.761678    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:27:20.761690    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:27:20.761696    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:27:20.761702    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:20.761795    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:20.761883    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.761969    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.762060    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:20.762168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:20.762361    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:20.762369    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:27:21.818394    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
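The "exit 0" probe above is libmachine's SSH readiness check: it keeps running a no-op command until sshd inside the guest accepts the connection. A hedged sketch of the same pattern using the system ssh binary follows; the flag choices and the 2-second backoff are assumptions, not libmachine's exact values.

    package sketch

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH runs "exit 0" over ssh until it succeeds or the deadline
    // passes. Host, user, key path and retry interval are illustrative.
    func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("ssh",
                "-i", keyPath,
                "-o", "StrictHostKeyChecking=no",
                "-o", "ConnectTimeout=5",
                fmt.Sprintf("%s@%s", user, host),
                "exit 0")
            if cmd.Run() == nil {
                return nil // guest accepted the connection and ran the command
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not ready after %s", host, timeout)
    }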
	I0819 10:27:21.818406    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:27:21.818419    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.818554    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.818654    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818747    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818841    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.818981    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.819131    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.819139    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:27:21.870784    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:27:21.870826    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:27:21.870831    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:27:21.870837    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.870976    4789 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:27:21.870986    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.871077    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.871169    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.871272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871352    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871452    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.871577    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.871711    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.871719    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:27:21.937676    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:27:21.937694    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.937826    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.937927    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938017    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938112    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.938245    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.938391    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.938402    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:27:21.996654    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:27:21.996676    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:27:21.996692    4789 buildroot.go:174] setting up certificates
	I0819 10:27:21.996701    4789 provision.go:84] configureAuth start
	I0819 10:27:21.996714    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.996873    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:21.996990    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.997094    4789 provision.go:143] copyHostCerts
	I0819 10:27:21.997133    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997201    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:27:21.997209    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997337    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:27:21.997534    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997567    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:27:21.997572    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997714    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:27:21.997882    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.997926    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:27:21.997941    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.998049    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:27:21.998203    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:27:22.044837    4789 provision.go:177] copyRemoteCerts
	I0819 10:27:22.044896    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:27:22.044908    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.045021    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.045107    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.045191    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.045288    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:22.078701    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:27:22.078779    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:27:22.098027    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:27:22.098092    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:27:22.117169    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:27:22.117235    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:27:22.137411    4789 provision.go:87] duration metric: took 140.68689ms to configureAuth
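configureAuth above issues a server certificate whose SANs (provision.go:117) cover 127.0.0.1, 192.169.0.5, ha-431000, localhost and minikube, signed by the shared CA. A condensed Go sketch of that issuance with crypto/x509 follows; newServerCert is a hypothetical helper, and the key size, serial number and the 26280h lifetime (taken from the profile's CertExpiration) are assumptions about defaults.

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a CA-signed server certificate covering the
    // SANs libmachine logs above. Sketch only, not minikube's cert helper.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2), // illustrative serial
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-431000", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }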
	I0819 10:27:22.137424    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:27:22.137558    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:22.137574    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:22.137700    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.137783    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.137859    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.137942    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.138028    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.138134    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.138266    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.138274    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:27:22.191384    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:27:22.191397    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:27:22.191469    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:27:22.191481    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.191636    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.191724    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191834    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191924    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.192051    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.192193    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.192236    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:27:22.256138    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:27:22.256165    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.256301    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.256391    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256475    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256578    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.256695    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.256839    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.256851    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:27:23.816844    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
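The "diff -u ... || { mv; daemon-reload; enable; restart; }" one-liner above only swaps in the freshly rendered docker.service and restarts the daemon when the unit actually changed; here diff fails because no unit existed yet, so the new one is installed and enabled. The same write-if-changed guard in Go, as a sketch (updateUnitIfChanged is hypothetical; the real comparison runs via SSH inside the guest):

    package sketch

    import (
        "bytes"
        "os"
    )

    // updateUnitIfChanged writes the rendered unit only when it differs from
    // what is on disk; on true the caller should daemon-reload and restart.
    func updateUnitIfChanged(path string, rendered []byte) (bool, error) {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return false, nil // up to date: skip the disruptive restart
        }
        if err != nil && !os.IsNotExist(err) {
            return false, err
        }
        if err := os.WriteFile(path, rendered, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }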
	
	I0819 10:27:23.816860    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:27:23.816871    4789 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0819 10:27:23.817008    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:27:23.817016    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:27:23.817020    4789 client.go:171] duration metric: took 13.841219093s to LocalClient.Create
	I0819 10:27:23.817036    4789 start.go:167] duration metric: took 13.84126124s to libmachine.API.Create "ha-431000"
	I0819 10:27:23.817044    4789 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:27:23.817051    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:27:23.817063    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.817219    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:27:23.817232    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.817321    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.817402    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.817497    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.817595    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.852993    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:27:23.857771    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:27:23.857792    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:27:23.857909    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:27:23.858094    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:27:23.858100    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:27:23.858323    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:27:23.868639    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:23.894485    4789 start.go:296] duration metric: took 77.430316ms for postStartSetup
	I0819 10:27:23.894509    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:23.895099    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.895256    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:23.895585    4789 start.go:128] duration metric: took 13.953185373s to createHost
	I0819 10:27:23.895598    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.895691    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.895790    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895879    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895966    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.896069    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:23.896228    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:23.896236    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:27:23.956133    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088443.744394113
	
	I0819 10:27:23.956145    4789 fix.go:216] guest clock: 1724088443.744394113
	I0819 10:27:23.956151    4789 fix.go:229] Guest: 2024-08-19 10:27:23.744394113 -0700 PDT Remote: 2024-08-19 10:27:23.895593 -0700 PDT m=+14.491162031 (delta=-151.198887ms)
	I0819 10:27:23.956169    4789 fix.go:200] guest clock delta is within tolerance: -151.198887ms
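fix.go above reads the guest's "date +%s.%N", compares it to the host clock, and accepts the roughly 151ms skew as within tolerance. A sketch of that comparison (clockDelta is illustrative; float64 parsing loses sub-microsecond precision, which is harmless at millisecond tolerances):

    package sketch

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns
    // guest time minus host time; a negative result means the guest lags.
    func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostNow), nil
    }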
	I0819 10:27:23.956173    4789 start.go:83] releasing machines lock for "ha-431000", held for 14.013893151s
	I0819 10:27:23.956192    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956322    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.956416    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956749    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956860    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956951    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:27:23.956980    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957023    4789 ssh_runner.go:195] Run: cat /version.json
	I0819 10:27:23.957036    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957073    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957109    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957170    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957184    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957292    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957350    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.957384    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:24.032926    4789 ssh_runner.go:195] Run: systemctl --version
	I0819 10:27:24.037723    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:27:24.041939    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:27:24.041985    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:27:24.055424    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:27:24.055435    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.055529    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.070257    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:27:24.079169    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:27:24.088264    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.088319    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:27:24.097172    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.105902    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:27:24.114585    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.123406    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:27:24.132626    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:27:24.141378    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:27:24.150490    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:27:24.158980    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:27:24.167068    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:27:24.175030    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.269460    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:27:24.289328    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.289405    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:27:24.304907    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.317291    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:27:24.330289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.340851    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.351456    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:27:24.376914    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.387402    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.402522    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:27:24.405426    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:27:24.412799    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:27:24.426019    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:27:24.528550    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:27:24.636829    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.636893    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:27:24.652027    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.753641    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:27.037286    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.283575266s)
	I0819 10:27:27.037346    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:27:27.047775    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:27:27.062961    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.074027    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:27:27.172330    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:27:27.284593    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.395779    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:27:27.409552    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.420868    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.532356    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:27:27.591558    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:27:27.591636    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:27:27.595967    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:27:27.596013    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:27:27.599275    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:27:27.625101    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:27:27.625173    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.642636    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.693299    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:27:27.693355    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:27.693783    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:27:27.698129    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
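The bash fragment above makes the host.minikube.internal mapping idempotent: any stale line for the name is stripped, the fresh mapping is appended, and the temp file is copied back over /etc/hosts. The same transformation in Go, as a sketch (ensureHostsEntry is a hypothetical name; the real code pipes this through bash on the guest):

    package sketch

    import "strings"

    // ensureHostsEntry removes any existing line ending in "\t<name>" and
    // appends "ip\tname", mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }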
	I0819 10:27:27.708916    4789 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:27:27.708982    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:27.709038    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:27.721971    4789 docker.go:685] Got preloaded images: 
	I0819 10:27:27.721984    4789 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0819 10:27:27.722034    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:27.730353    4789 ssh_runner.go:195] Run: which lz4
	I0819 10:27:27.733218    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 10:27:27.733323    4789 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:27:27.736425    4789 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:27:27.736445    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0819 10:27:28.750864    4789 docker.go:649] duration metric: took 1.017557348s to copy over tarball
	I0819 10:27:28.750956    4789 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:27:31.074672    4789 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323648699s)
	I0819 10:27:31.074688    4789 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:27:31.100633    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:31.109680    4789 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0819 10:27:31.123335    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:31.234501    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:33.578614    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344043512s)
	I0819 10:27:33.578701    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:33.592021    4789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 10:27:33.592040    4789 cache_images.go:84] Images are preloaded, skipping loading
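cache_images.go decides whether image loading can be skipped by listing "docker images --format {{.Repository}}:{{.Tag}}" and checking for a marker image; earlier, at docker.go:691, registry.k8s.io/kube-apiserver:v1.31.0 was missing, which triggered the tarball copy and extraction. A minimal sketch of that presence test (hasPreloadedImage is a hypothetical name):

    package sketch

    import "strings"

    // hasPreloadedImage reports whether the marker image appears in the
    // newline-separated `docker images` listing captured above.
    func hasPreloadedImage(listing, image string) bool {
        for _, line := range strings.Split(listing, "\n") {
            if strings.TrimSpace(line) == image {
                return true
            }
        }
        return false
    }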
	I0819 10:27:33.592048    4789 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:27:33.592132    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:27:33.592198    4789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:27:33.629283    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:33.629295    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:33.629309    4789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:27:33.629329    4789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:27:33.629424    4789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
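Configs like the one above are rendered from Go templates over the kubeadm options logged at kubeadm.go:181. A toy rendering of just the InitConfiguration header, assuming a text/template approach; the fragment and struct are illustrative, and minikube's real templates live in its bootstrapper package.

    package main

    import (
        "os"
        "text/template"
    )

    // frag renders only the first few lines of the InitConfiguration shown
    // above; the field names mirror kubeadm options from the log.
    var frag = template.Must(template.New("kubeadm").Parse(
        "apiVersion: kubeadm.k8s.io/v1beta3\n" +
            "kind: InitConfiguration\n" +
            "localAPIEndpoint:\n" +
            "  advertiseAddress: {{.AdvertiseAddress}}\n" +
            "  bindPort: {{.APIServerPort}}\n"))

    func main() {
        _ = frag.Execute(os.Stdout, struct {
            AdvertiseAddress string
            APIServerPort    int
        }{AdvertiseAddress: "192.169.0.5", APIServerPort: 8443})
    }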
	
	I0819 10:27:33.629439    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:27:33.629491    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:27:33.642904    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:27:33.642969    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0819 10:27:33.643018    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:27:33.652008    4789 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:27:33.652070    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:27:33.660066    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:27:33.673571    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:27:33.686700    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:27:33.700085    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0819 10:27:33.713804    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:27:33.716661    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
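
The /etc/hosts rewrite above follows a standard idempotent pattern: strip any existing line ending in the hostname, append a fresh record, write the result to a temp file, then copy it over the original. A Go sketch of the same pattern (the IP and hostname come from the log; the temp path and the abbreviated error handling are this sketch's own):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostRecord drops any existing entry ending in "\thost" and
// appends "ip\thost", mirroring the grep -v / echo pipeline above.
func upsertHostRecord(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale record
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	out := upsertHostRecord(string(data), "192.169.0.254", "control-plane.minikube.internal")
	// Stage the new file first, then replace: the same write-then-cp
	// step the bash one-liner performs via /tmp/h.$$.
	tmp := "/tmp/hosts.new"
	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Println("staged update at", tmp)
}
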
	I0819 10:27:33.726684    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:33.822205    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:27:33.836833    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:27:33.836844    4789 certs.go:194] generating shared ca certs ...
	I0819 10:27:33.836855    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.837051    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:27:33.837132    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:27:33.837142    4789 certs.go:256] generating profile certs ...
	I0819 10:27:33.837189    4789 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:27:33.837203    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0819 10:27:33.888319    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt ...
	I0819 10:27:33.888333    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt: {Name:mk2ecc34873277fbe11bf267ec0d97684e18e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888666    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key ...
	I0819 10:27:33.888675    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key: {Name:mk51abee214c838f4621902241303fe73ba93aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888900    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e
	I0819 10:27:33.888915    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0819 10:27:34.060027    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e ...
	I0819 10:27:34.060046    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e: {Name:mk108eb9cf88ab2aae15883e4a3724751adb3118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060347    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e ...
	I0819 10:27:34.060356    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e: {Name:mk8fae11cce9c9a45d3e151953d1ee9ab2cc82d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060557    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:27:34.060759    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:27:34.060929    4789 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:27:34.060943    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0819 10:27:34.243675    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0819 10:27:34.243690    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mkeb1eac7ee8b3901067565b7ff883710f2d1088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244061    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key ...
	I0819 10:27:34.244069    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key: {Name:mkc1afcd7a6a9a572716155e33c32e7def81650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
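
Each "generating signed profile cert" step above issues a leaf certificate signed by the shared minikube CA, with the addresses listed after "with IP's:" embedded as IP SANs. A self-contained sketch of that flow using Go's crypto/x509; it creates a throwaway CA in-process rather than loading ca.key from disk, and the subjects, key usages, and leaf expiry are illustrative choices for the sketch:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; minikube instead reuses the existing ca.crt/ca.key.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration value seen in the log
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate carrying the apiserver IP SANs from the log.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.5"),
			net.ParseIP("192.169.0.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Printf("issued CA-signed leaf cert, %d bytes of DER\n", len(leafDER))
}
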
	I0819 10:27:34.244312    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:27:34.244340    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:27:34.244378    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:27:34.244398    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:27:34.244416    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:27:34.244448    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:27:34.244486    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:27:34.244521    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:27:34.244615    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:27:34.244666    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:27:34.244675    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:27:34.244748    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:27:34.244776    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:27:34.244831    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:27:34.244909    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:34.244942    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.244990    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.245007    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.245522    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:27:34.267677    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:27:34.287348    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:27:34.309971    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:27:34.330910    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 10:27:34.350036    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:27:34.370663    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:27:34.390457    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:27:34.410226    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:27:34.431025    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:27:34.451232    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:27:34.471133    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:27:34.487758    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:27:34.493769    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:27:34.506308    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511941    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511996    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.519851    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:27:34.531120    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:27:34.540803    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544302    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544341    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.548724    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:27:34.558817    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:27:34.568088    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571692    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571731    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.575999    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
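
The three openssl x509 -hash -noout runs above compute each certificate's subject-name hash (b5213941 for minikubeCA, for example), and the ln -fs commands publish the PEM under /etc/ssl/certs/<hash>.0, which is how OpenSSL-style trust stores look a CA up by subject. A sketch of those two steps from Go, shelling out to openssl (it assumes the binary is on PATH and that the process can write to the certs directory):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert hashes certPath with openssl and links <hash>.0 in
// certsDir to it, mirroring the ln -fs commands in the log.
func trustCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
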
	I0819 10:27:34.585057    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:27:34.588207    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:27:34.588251    4789 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:34.588345    4789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:27:34.601241    4789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:27:34.609838    4789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:27:34.618794    4789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:27:34.627200    4789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:27:34.627208    4789 kubeadm.go:157] found existing configuration files:
	
	I0819 10:27:34.627243    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:27:34.635162    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:27:34.635198    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:27:34.643336    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:27:34.651247    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:27:34.651280    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:27:34.659346    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.667240    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:27:34.667281    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.675386    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:27:34.684053    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:27:34.684105    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
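
The four grep/rm pairs above apply one rule: any leftover kubeconfig that does not already point at https://control-plane.minikube.internal:8443 is treated as stale and deleted so that kubeadm init regenerates it. The same check written as a loop; this sketch operates on the local filesystem, whereas minikube runs each command over SSH on the node:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove it so kubeadm
			// writes a fresh one (rm -f likewise ignores absent files).
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}
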
	I0819 10:27:34.692357    4789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:27:34.751991    4789 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:27:34.752160    4789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:27:34.833970    4789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:27:34.834062    4789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:27:34.834153    4789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:27:34.842513    4789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:27:34.863067    4789 out.go:235]   - Generating certificates and keys ...
	I0819 10:27:34.863126    4789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:27:34.863179    4789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:27:35.003012    4789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:27:35.766829    4789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:27:35.976153    4789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:27:36.134850    4789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:27:36.228947    4789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:27:36.229166    4789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.375842    4789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:27:36.375934    4789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.597289    4789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:27:36.907219    4789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:27:37.426404    4789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:27:37.426585    4789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:27:37.566387    4789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:27:38.000620    4789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:27:38.121335    4789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:27:38.179042    4789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:27:38.231270    4789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:27:38.231752    4789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:27:38.233818    4789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:27:38.255454    4789 out.go:235]   - Booting up control plane ...
	I0819 10:27:38.255535    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:27:38.255605    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:27:38.255655    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:27:38.255734    4789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:27:38.255809    4789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:27:38.255842    4789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:27:38.364951    4789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:27:38.365069    4789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:27:39.366309    4789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001984632s
	I0819 10:27:39.366388    4789 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:27:45.029099    4789 kubeadm.go:310] [api-check] The API server is healthy after 5.666724975s
	I0819 10:27:45.039440    4789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:27:45.046481    4789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:27:45.059797    4789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:27:45.059959    4789 kubeadm.go:310] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:27:45.067482    4789 kubeadm.go:310] [bootstrap-token] Using token: rrr6yu.ivgebthw63l7ehzv
	I0819 10:27:45.106820    4789 out.go:235]   - Configuring RBAC rules ...
	I0819 10:27:45.107004    4789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:27:45.110638    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:27:45.151902    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:27:45.154406    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:27:45.156223    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:27:45.158190    4789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:27:45.434935    4789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:27:45.846068    4789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:27:46.434136    4789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:27:46.434675    4789 kubeadm.go:310] 
	I0819 10:27:46.434724    4789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:27:46.434728    4789 kubeadm.go:310] 
	I0819 10:27:46.434798    4789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:27:46.434808    4789 kubeadm.go:310] 
	I0819 10:27:46.434829    4789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:27:46.434881    4789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:27:46.434925    4789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:27:46.434930    4789 kubeadm.go:310] 
	I0819 10:27:46.434974    4789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:27:46.434984    4789 kubeadm.go:310] 
	I0819 10:27:46.435035    4789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:27:46.435041    4789 kubeadm.go:310] 
	I0819 10:27:46.435080    4789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:27:46.435139    4789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:27:46.435197    4789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:27:46.435204    4789 kubeadm.go:310] 
	I0819 10:27:46.435268    4789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:27:46.435333    4789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:27:46.435337    4789 kubeadm.go:310] 
	I0819 10:27:46.435410    4789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435498    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd \
	I0819 10:27:46.435515    4789 kubeadm.go:310] 	--control-plane 
	I0819 10:27:46.435520    4789 kubeadm.go:310] 
	I0819 10:27:46.435589    4789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:27:46.435594    4789 kubeadm.go:310] 
	I0819 10:27:46.435664    4789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435746    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd 
	I0819 10:27:46.435997    4789 kubeadm.go:310] W0819 17:27:34.545490    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436229    4789 kubeadm.go:310] W0819 17:27:34.546600    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436316    4789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
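
The --discovery-token-ca-cert-hash value in both join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA during token-based bootstrap. A sketch that recomputes the hash from the ca.crt path used in this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// kubeadm pins the CA by hashing the DER-encoded Subject Public
	// Key Info (SPKI) of the CA certificate.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
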
	I0819 10:27:46.436331    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:46.436337    4789 cni.go:136] multinode detected (1 node found), recommending kindnet
	I0819 10:27:46.458203    4789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:27:46.517773    4789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:27:46.523858    4789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:27:46.523872    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:27:46.539513    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 10:27:46.759807    4789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:27:46.759878    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:46.759883    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0819 10:27:46.777623    4789 ops.go:34] apiserver oom_adj: -16
	I0819 10:27:46.926523    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.427175    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.927281    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.428033    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.926686    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.426608    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.926666    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:50.010199    4789 kubeadm.go:1113] duration metric: took 3.25030545s to wait for elevateKubeSystemPrivileges
	I0819 10:27:50.010216    4789 kubeadm.go:394] duration metric: took 15.42163041s to StartCluster
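
The burst of kubectl get sa default calls between 10:27:46.9 and 10:27:49.9 is a poll loop: the default service account only exists once the controller manager's service-account controller has run, so minikube retries on a short interval before creating the minikube-rbac cluster-admin binding. A generic sketch of such a wait; the 500ms interval and 30s budget are this sketch's own assumptions, and it invokes plain kubectl from PATH rather than the versioned binary path in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds,
// mirroring the retry loop visible in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig, "-n", "default")
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}
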
	I0819 10:27:50.010227    4789 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.010325    4789 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.010762    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.011021    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:27:50.011033    4789 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.011050    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:27:50.011076    4789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:27:50.011116    4789 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0819 10:27:50.011120    4789 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0819 10:27:50.011148    4789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0819 10:27:50.011152    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.011155    4789 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0819 10:27:50.011186    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.011415    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011420    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011430    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.011431    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.020667    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
	I0819 10:27:50.021171    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021230    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
	I0819 10:27:50.021523    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021533    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.021634    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021753    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.021940    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.022115    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.022146    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.022229    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.022806    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.022988    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.023051    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.024924    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.025156    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:27:50.025529    4789 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:27:50.025699    4789 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0819 10:27:50.025720    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.025937    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.025963    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.031229    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51138
	I0819 10:27:50.031604    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.031942    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.031953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.032154    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.032270    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.032358    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.032435    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.033436    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.034958    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51140
	I0819 10:27:50.035269    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.035586    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.035596    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.035796    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.036148    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.036165    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.044937    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51142
	I0819 10:27:50.045312    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.045667    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.045680    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.045893    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.045996    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.046077    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.046151    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.047102    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.047225    4789 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.047234    4789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:27:50.047243    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.047325    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.047417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.047495    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.047571    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.056055    4789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:27:50.076134    4789 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.076146    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:27:50.076163    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.076310    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.076417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.076556    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.076664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.113554    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 10:27:50.127003    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.262022    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.488277    4789 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
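
The sed pipeline at 10:27:50.113554 rewrites the CoreDNS Corefile held in the coredns ConfigMap: it inserts a hosts block mapping host.minikube.internal to the host gateway just before the forward directive, so that name resolves locally while everything else falls through to upstream DNS. The same transformation sketched with plain string handling in Go (the example Corefile in main is a minimal stand-in):

package main

import (
	"fmt"
	"strings"
)

// injectHostsBlock inserts a CoreDNS hosts{} stanza immediately before
// the forward directive, like the sed invocation in the log.
func injectHostsBlock(corefile, ip, host string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line + "\n")
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
	fmt.Print(injectHostsBlock(corefile, "192.169.0.1", "host.minikube.internal"))
}
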
	I0819 10:27:50.488318    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488327    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488534    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488547    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488556    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488563    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488564    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488681    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488704    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488718    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488767    4789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:27:50.488780    4789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:27:50.488862    4789 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 10:27:50.488867    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.488877    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.488882    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495057    4789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:27:50.495477    4789 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 10:27:50.495484    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.495490    4789 round_trippers.go:473]     Content-Type: application/json
	I0819 10:27:50.495494    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.498504    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:27:50.498632    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.498641    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.498797    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.498806    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.498814    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649595    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649607    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.649833    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.649843    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649848    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.649874    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649893    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.650019    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.650028    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.650044    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.673040    4789 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 10:27:50.709732    4789 addons.go:510] duration metric: took 698.654107ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 10:27:50.709774    4789 start.go:246] waiting for cluster config update ...
	I0819 10:27:50.709799    4789 start.go:255] writing updated cluster config ...
	I0819 10:27:50.746763    4789 out.go:201] 
	I0819 10:27:50.768467    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.768565    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.790908    4789 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:27:50.832651    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:50.832673    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:50.832790    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:50.832801    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:50.832852    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.833261    4789 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:50.833314    4789 start.go:364] duration metric: took 41.162µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:27:50.833329    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.833382    4789 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0819 10:27:50.854688    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:50.854833    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.854870    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.864309    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
	I0819 10:27:50.864640    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.864951    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.864963    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.865175    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.865294    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:27:50.865374    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:27:50.865472    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:50.865485    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:50.865515    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:50.865553    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865565    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865607    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:50.865634    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865649    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865666    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:50.865676    4789 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0819 10:27:50.865754    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.865776    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:27:50.891966    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:50.891987    4789 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0819 10:27:50.892145    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.892330    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:50.892137    4845 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:50.892421    4789 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:51.078705    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.078630    4845 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa...
	I0819 10:27:51.171843    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.171751    4845 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0819 10:27:51.171860    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0819 10:27:51.171868    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0819 10:27:51.172685    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.172591    4845 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02 ...
	I0819 10:27:51.544884    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.544910    4789 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:27:51.544922    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:27:51.571631    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:27:51.571653    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:51.571680    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571706    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571739    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:51.571767    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:51.571780    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:51.574668    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Pid is 4850
	I0819 10:27:51.575734    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:27:51.575757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.575783    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:51.576702    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:51.576759    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:51.576778    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:51.576816    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:51.576830    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:51.576844    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:51.582262    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:51.590515    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:51.591362    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:51.591388    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:51.591397    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:51.591407    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:51.978930    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:51.978947    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:52.094059    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:52.094091    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:52.094127    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:52.094142    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:52.094869    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:52.094879    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:53.577521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0819 10:27:53.577541    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:53.577636    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:53.578446    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:53.578461    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:53.578472    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:53.578481    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:53.578489    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:53.578507    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:55.579485    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0819 10:27:55.579501    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:55.579576    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:55.580358    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:55.580387    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:55.580414    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:55.580426    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:55.580434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:55.580442    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.581588    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0819 10:27:57.581603    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:57.581681    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:57.582486    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:57.582510    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:57.582521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:57.582530    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:57.582540    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:57.582548    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.680321    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:27:57.680434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:27:57.680445    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:27:57.704982    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:27:59.583757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0819 10:27:59.583772    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:59.583842    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:59.584652    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:59.584696    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:59.584710    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:59.584720    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:59.584729    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:59.584737    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:28:01.585137    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0819 10:28:01.585154    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.585235    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.585996    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:28:01.586042    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:28:01.586055    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:28:01.586080    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:28:01.586086    4789 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
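The attempt loop above resolves the freshly generated MAC address to an IP by re-reading /var/db/dhcpd_leases between attempts until a matching lease appears. A minimal Go sketch of that lookup, assuming macOS's brace-delimited lease records with ip_address= and hw_address= lines (field names inferred from this log, not taken from the driver source):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findLeaseIP scans a macOS-style dhcpd_leases file for a record whose
    // hw_address field contains the given MAC and returns its ip_address.
    func findLeaseIP(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	matched := false
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case line == "{": // new record: reset per-record state
    			ip, matched = "", false
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			matched = strings.Contains(line, mac)
    		case line == "}":
    			if matched && ip != "" {
    				return ip, nil
    			}
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	ip, err := findLeaseIP("/var/db/dhcpd_leases", "5a:74:68:47:b9:72")
    	if err != nil {
    		fmt.Println(err) // the real driver keeps polling; here we just report
    		return
    	}
    	fmt.Println("IP:", ip)
    }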
	I0819 10:28:01.586098    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:01.586694    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586794    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586889    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:28:01.586896    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:28:01.586980    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.587029    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.587790    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:28:01.587796    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:28:01.587800    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:28:01.587804    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:01.587881    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:01.587956    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588060    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588138    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:01.588256    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:01.588435    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:01.588443    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:28:02.645180    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
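Waiting for SSH here means repeatedly opening a session and running `exit 0` until it succeeds, exactly as the "About to run SSH command: exit 0" lines show. A minimal sketch using golang.org/x/crypto/ssh; the address, user, key path, and retry cadence are placeholders rather than the driver's actual values:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // waitForSSH dials addr with key auth and runs "exit 0" until a session
    // succeeds or the deadline passes.
    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
    		Timeout:         5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
    			sess, serr := client.NewSession()
    			if serr == nil {
    				runErr := sess.Run("exit 0")
    				sess.Close()
    				client.Close()
    				if runErr == nil {
    					return nil // SSH is available
    				}
    			} else {
    				client.Close()
    			}
    		}
    		time.Sleep(2 * time.Second) // retry cadence is illustrative
    	}
    	return fmt.Errorf("ssh not available on %s within %s", addr, timeout)
    }

    func main() {
    	// Placeholder values; the real ones come from the machine config.
    	err := waitForSSH("192.169.0.6:22", "docker", "/path/to/id_rsa", time.Minute)
    	fmt.Println("ssh ready:", err == nil, err)
    }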
	I0819 10:28:02.645193    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:28:02.645198    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.645326    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.645422    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645501    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645583    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.645718    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.645869    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.645877    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:28:02.700961    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:28:02.700992    4789 main.go:141] libmachine: found compatible host: buildroot
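Provisioner detection is a `cat /etc/os-release` followed by a key=value parse; the ID field ("buildroot" above) selects the provisioner. A minimal sketch of that parse, with quote stripping; the sample input mirrors the output captured above:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseOSRelease turns os-release text into a map, stripping quotes.
    func parseOSRelease(data string) map[string]string {
    	out := map[string]string{}
    	for _, line := range strings.Split(data, "\n") {
    		line = strings.TrimSpace(line)
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		out[k] = strings.Trim(v, `"`)
    	}
    	return out
    }

    func main() {
    	sample := "NAME=Buildroot\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    	info := parseOSRelease(sample)
    	if info["ID"] == "buildroot" {
    		fmt.Println("found compatible host:", info["ID"])
    	}
    }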
	I0819 10:28:02.700998    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:28:02.701003    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701132    4789 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:28:02.701143    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701237    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.701327    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.701424    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701502    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701588    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.701720    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.701855    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.701864    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:28:02.773500    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:28:02.773515    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.773649    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.773737    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773840    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773945    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.774071    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.774226    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.774237    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:28:02.838956    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
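The hostname step sets the kernel hostname over SSH and then patches /etc/hosts so 127.0.1.1 resolves to the new name: rewrite an existing 127.0.1.1 line if present, otherwise append one. A minimal Go sketch of the same edit applied to file contents directly (the remote shell version above does it with grep/sed/tee):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites hosts content so that 127.0.1.1 maps to name:
    // replace an existing 127.0.1.1 line, or append one if none exists.
    func ensureHostsEntry(content, name string) string {
    	lines := strings.Split(content, "\n")
    	replaced := false
    	for i, line := range lines {
    		if strings.HasPrefix(strings.TrimSpace(line), "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name
    			replaced = true
    		}
    	}
    	if !replaced {
    		lines = append(lines, "127.0.1.1 "+name)
    	}
    	return strings.Join(lines, "\n")
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(ensureHostsEntry(string(data), "ha-431000-m02"))
    }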
	I0819 10:28:02.838971    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:28:02.838984    4789 buildroot.go:174] setting up certificates
	I0819 10:28:02.838992    4789 provision.go:84] configureAuth start
	I0819 10:28:02.838998    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.839135    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:02.839223    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.839322    4789 provision.go:143] copyHostCerts
	I0819 10:28:02.839347    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839393    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:28:02.839399    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839532    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:28:02.839738    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839769    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:28:02.839774    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839845    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:28:02.839992    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840021    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:28:02.840025    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840090    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:28:02.840244    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
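The server certificate is signed by the local CA with the SANs listed above (loopback, the node IP, the machine name, and the friendly names). A minimal crypto/x509 sketch of issuing such a cert; paths, serial, and validity are illustrative, error handling is elided for brevity, and a PKCS#8 CA key would need x509.ParsePKCS8PrivateKey instead:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Load the CA pair (PEM). Error handling elided for brevity.
    	caCertPEM, _ := os.ReadFile("ca.pem")
    	caKeyPEM, _ := os.ReadFile("ca-key.pem")
    	caBlock, _ := pem.Decode(caCertPEM)
    	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // PKCS#1 assumed

    	// Fresh key for the server certificate.
    	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1), // illustrative; use a random serial in practice
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
    		DNSNames:    []string{"ha-431000-m02", "localhost", "minikube"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }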
	I0819 10:28:02.878856    4789 provision.go:177] copyRemoteCerts
	I0819 10:28:02.878899    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:28:02.878912    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.879041    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.879132    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.879231    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.879330    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:02.914748    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:28:02.914819    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:28:02.934608    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:28:02.934673    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:28:02.954833    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:28:02.954900    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:28:02.974652    4789 provision.go:87] duration metric: took 135.649275ms to configureAuth
	I0819 10:28:02.974666    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:28:02.974809    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:02.974823    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:02.974958    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.975063    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.975147    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975219    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975328    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.975454    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.975601    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.975609    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:28:03.033628    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:28:03.033639    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:28:03.033715    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:28:03.033730    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.033861    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.033950    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034053    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034140    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.034264    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.034412    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.034459    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:28:03.102644    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:28:03.102663    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.102811    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.102898    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.102999    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.103120    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.103244    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.103390    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.103404    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:28:04.637367    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
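The unit is written to docker.service.new and only swapped into place, with daemon-reload, enable, and restart, when it differs from the installed file, which keeps re-provisioning idempotent; on this fresh node the diff fails because no unit exists yet, so the new file is installed unconditionally. A minimal local sketch of the same compare-and-swap, assuming root and systemctl on PATH:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installIfChanged replaces dst with src and reloads/restarts the unit only
    // when the contents differ (mirrors the `diff || mv && systemctl` dance).
    func installIfChanged(src, dst, unit string) error {
    	newData, err := os.ReadFile(src)
    	if err != nil {
    		return err
    	}
    	oldData, err := os.ReadFile(dst)
    	if err == nil && bytes.Equal(oldData, newData) {
    		return nil // nothing changed, leave the running service alone
    	}
    	if err := os.Rename(src, dst); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", unit},
    		{"restart", unit},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := installIfChanged("/lib/systemd/system/docker.service.new",
    		"/lib/systemd/system/docker.service", "docker"); err != nil {
    		fmt.Println(err)
    	}
    }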
	I0819 10:28:04.637381    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:28:04.637388    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0819 10:28:04.637524    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:28:04.637530    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:28:04.637534    4789 client.go:171] duration metric: took 13.771742286s to LocalClient.Create
	I0819 10:28:04.637544    4789 start.go:167] duration metric: took 13.771771513s to libmachine.API.Create "ha-431000"
	I0819 10:28:04.637550    4789 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:28:04.637557    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:28:04.637566    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.637712    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:28:04.637723    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.637834    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.637926    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.638026    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.638127    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.678475    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:28:04.682965    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:28:04.682980    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:28:04.683079    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:28:04.683246    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:28:04.683253    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:28:04.683434    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:28:04.695086    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:04.723279    4789 start.go:296] duration metric: took 85.720185ms for postStartSetup
	I0819 10:28:04.723311    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:04.723943    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.724123    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:28:04.724446    4789 start.go:128] duration metric: took 13.890752069s to createHost
	I0819 10:28:04.724460    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.724558    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.724679    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724786    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724871    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.724979    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:04.725097    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:04.725103    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:28:04.784682    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088484.852271103
	
	I0819 10:28:04.784694    4789 fix.go:216] guest clock: 1724088484.852271103
	I0819 10:28:04.784698    4789 fix.go:229] Guest: 2024-08-19 10:28:04.852271103 -0700 PDT Remote: 2024-08-19 10:28:04.724454 -0700 PDT m=+55.319126445 (delta=127.817103ms)
	I0819 10:28:04.784725    4789 fix.go:200] guest clock delta is within tolerance: 127.817103ms
	I0819 10:28:04.784731    4789 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.951104834s
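The clock check runs `date +%s.%N` in the guest, subtracts the host's timestamp, and accepts the result if the skew is inside a tolerance (127.817103ms passed here). A minimal sketch of the delta computation; the tolerance constant is a placeholder, not minikube's actual threshold:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses `date +%s.%N` output and returns guest-minus-host skew.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	// Values taken from the log lines above.
    	delta, err := clockDelta("1724088484.852271103", time.Unix(1724088484, 724454000))
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	const tolerance = time.Second // illustrative threshold
    	fmt.Printf("delta=%v within=%v\n", delta, math.Abs(float64(delta)) <= float64(tolerance))
    }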
	I0819 10:28:04.784750    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.784884    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.807240    4789 out.go:177] * Found network options:
	I0819 10:28:04.829600    4789 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:28:04.851548    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.851607    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852495    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852747    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852876    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:28:04.852915    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:28:04.852962    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.853080    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:28:04.853100    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.853127    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853372    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853394    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853596    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853633    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853742    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853804    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.853880    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:28:04.886788    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:28:04.886847    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:28:04.931189    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:28:04.931209    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:04.931315    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:04.947443    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:28:04.955693    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:28:04.964155    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:28:04.964197    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:28:04.972493    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.980548    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:28:04.988709    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.996856    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:28:05.005271    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:28:05.013575    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:28:05.021801    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:28:05.030285    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:28:05.037842    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:28:05.045332    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.140730    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
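The containerd reconfiguration above is a series of sed line rewrites against /etc/containerd/config.toml (force SystemdCgroup = false for the cgroupfs driver, swap runtime classes to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d), followed by daemon-reload and a restart. A minimal Go analogue of the SystemdCgroup rewrite, using a multiline regexp that preserves indentation:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setSystemdCgroup rewrites `SystemdCgroup = ...` lines in TOML text,
    // the same effect as the sed rule shown in the log.
    func setSystemdCgroup(config string, enabled bool) string {
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
    	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
    	fmt.Print(setSystemdCgroup(in, false))
    }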
	I0819 10:28:05.159555    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:05.159625    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:28:05.177222    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.189624    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:28:05.203743    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.214606    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.224836    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:28:05.249649    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.261132    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:05.276191    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:28:05.279129    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:28:05.287175    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:28:05.300748    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:28:05.396444    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:28:05.505778    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:28:05.505805    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:28:05.520914    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.616215    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:28:07.911303    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.295016426s)
	I0819 10:28:07.911366    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:28:07.923467    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:28:07.938312    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:07.949283    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:28:08.046922    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:28:08.152880    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.256594    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:28:08.271339    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:08.283089    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.384798    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:28:08.441813    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:28:08.441881    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:28:08.446421    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:28:08.446473    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:28:08.449807    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:28:08.479621    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
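Both readiness gates above poll under a 60s budget: stat the CRI socket until it exists, then ask crictl for its version. A minimal sketch of such a poll-with-deadline helper; the poll interval is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the deadline passes.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // poll interval is illustrative
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }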
	I0819 10:28:08.479690    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.496571    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.537488    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:28:08.579078    4789 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:28:08.603340    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:08.603786    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:28:08.608372    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:08.618166    4789 mustload.go:65] Loading cluster: ha-431000
	I0819 10:28:08.618314    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:08.618533    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.618549    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.627122    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
	I0819 10:28:08.627459    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.627845    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.627857    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.628097    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.628239    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:28:08.628342    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:08.628430    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:28:08.629353    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:08.629592    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.629608    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.638041    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51172
	I0819 10:28:08.638388    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.638753    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.638770    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.638992    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.639108    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:08.639209    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:28:08.639216    4789 certs.go:194] generating shared ca certs ...
	I0819 10:28:08.639225    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.639357    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:28:08.639425    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:28:08.639434    4789 certs.go:256] generating profile certs ...
	I0819 10:28:08.639538    4789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:28:08.639562    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788
	I0819 10:28:08.639575    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0819 10:28:08.693749    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 ...
	I0819 10:28:08.693766    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788: {Name:mkade16cb35e521e9e55fc42d7cb129c8b94b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694149    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 ...
	I0819 10:28:08.694160    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788: {Name:mkeae0a28d48da45f84299952289f15db5f944f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694378    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:28:08.694703    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:28:08.694954    4789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:28:08.694964    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:28:08.694987    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:28:08.695006    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:28:08.695024    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:28:08.695042    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:28:08.695060    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:28:08.695078    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:28:08.695096    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:28:08.695175    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:28:08.695213    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:28:08.695228    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:28:08.695261    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:28:08.695290    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:28:08.695321    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:28:08.695400    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:08.695438    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:28:08.695462    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:28:08.695482    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:08.695511    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:08.695664    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:08.695745    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:08.695845    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:08.695925    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:08.729193    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:28:08.736181    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:28:08.748665    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:28:08.751826    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:28:08.773481    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:28:08.777252    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:28:08.787546    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:28:08.791015    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:28:08.800105    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:28:08.803218    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:28:08.812240    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:28:08.815351    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:28:08.824083    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:28:08.844052    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:28:08.864107    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:28:08.884612    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:28:08.904284    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 10:28:08.924397    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:28:08.944026    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:28:08.964689    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:28:08.984934    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:28:09.004413    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:28:09.024043    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:28:09.043924    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:28:09.058066    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:28:09.071585    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:28:09.085080    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:28:09.098536    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:28:09.112048    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:28:09.125242    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:28:09.139717    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:28:09.144032    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:28:09.152602    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.155967    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.156009    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.160192    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:28:09.168568    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:28:09.176997    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180533    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180568    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.184799    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:28:09.193356    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:28:09.201811    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205453    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205494    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.209760    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
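
A note on the symlink names above: the eight hex digits (51391683, 3ec20f2e, b5213941) are OpenSSL subject hashes. Running openssl x509 -hash -noout against a certificate prints the hash of its subject, and OpenSSL-based clients locate trusted CAs in /etc/ssl/certs by looking up symlinks named hash.0, hash.1, and so on. A minimal local Go sketch of the same dance the runner performs over SSH (paths assumed from the log; needs root and an openssl binary):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        // Assumed path, taken from the log; any PEM certificate works.
        cert := "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl x509 -hash -noout prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // Equivalent of `ln -fs <cert> /etc/ssl/certs/<hash>.0`.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // the -f part: drop a stale link if present
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
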
	I0819 10:28:09.218392    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:28:09.222392    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:28:09.222437    4789 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:28:09.222498    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:28:09.222516    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:28:09.222559    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:28:09.234408    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:28:09.234452    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
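
The kube-vip static-pod manifest above is generated in Go (kube-vip.go in the log) by filling a template with this cluster's VIP (192.169.0.254) and API server port (8443). A minimal sketch of that templating pattern, using a cut-down template rather than minikube's actual one:

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down stand-in for the full manifest above; only the fields
    // that vary per cluster are templated.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - name: address
          value: {{.VIP}}
        - name: port
          value: "{{.Port}}"
        - name: lb_enable
          value: "{{.EnableLB}}"
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
        // Values taken from the log lines above.
        data := struct {
            VIP      string
            Port     int
            EnableLB bool
        }{VIP: "192.169.0.254", Port: 8443, EnableLB: true}
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
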
	I0819 10:28:09.234506    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.242939    4789 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:28:09.242994    4789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0819 10:28:09.251336    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm
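
Each binary above is fetched together with a published .sha256 file and verified before use (the checksum=file:... query encodes that pairing for minikube's downloader). A hedged sketch of such a sha256-verified download; fetchVerified is a hypothetical helper, not minikube's download.go API:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchVerified downloads url to dest and fails unless the file's
    // sha256 matches the digest published at sumURL. Production code
    // would also check the HTTP status codes.
    func fetchVerified(url, sumURL, dest string) error {
        sumResp, err := http.Get(sumURL)
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        sumBytes, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        want := strings.Fields(string(sumBytes))[0] // first token is the hex digest

        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        // Hash while writing so the payload is only streamed once.
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
        if err := fetchVerified(base, base+".sha256", "/tmp/kubectl"); err != nil {
            panic(err)
        }
        fmt.Println("verified download complete")
    }
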
	I0819 10:28:11.797289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:28:11.809069    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.809192    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.812267    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:28:11.812291    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 10:28:12.469259    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.469340    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.472845    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:28:12.472869    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:28:13.348737    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.348820    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.352429    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:28:13.352449    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:28:13.542994    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:28:13.550937    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:28:13.564187    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:28:13.577654    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:28:13.591433    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:28:13.594347    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
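
The one-liner above is an idempotent /etc/hosts update: filter out any stale control-plane.minikube.internal line, append the current VIP mapping, and copy the result back into place. A rough Go equivalent (assumes it runs as root on the guest):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const name = "control-plane.minikube.internal"
        const entry = "192.169.0.254\t" + name // VIP taken from the log

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Keep every line except a stale mapping for the control-plane name.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }
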
	I0819 10:28:13.604347    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:13.710422    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:13.730131    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:13.730407    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:13.730448    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:13.739474    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51199
	I0819 10:28:13.739816    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:13.740174    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:13.740190    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:13.740438    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:13.740564    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:13.740661    4789 start.go:317] joinCluster: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:28:13.740750    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 10:28:13.740767    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:13.740857    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:13.740939    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:13.741027    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:13.741101    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:13.815525    4789 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:13.815563    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0819 10:28:41.108330    4789 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (27.292143754s)
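
The join is a two-step exchange: kubeadm token create --print-join-command --ttl=0 on the existing control plane emits a complete join command (bootstrap token plus discovery CA-cert hash, non-expiring because of --ttl=0), and that command is replayed on the new node with the extra control-plane flags seen above. A hedged sketch of the first step, assumed to run on a host where kubeadm is on the PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // --ttl=0 makes the bootstrap token non-expiring, as in the log.
        out, err := exec.Command("kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        join := strings.TrimSpace(string(out))
        // join now holds e.g. "kubeadm join <host>:8443 --token ...
        // --discovery-token-ca-cert-hash sha256:..." ready to run on the new node.
        fmt.Println(join)
    }
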
	I0819 10:28:41.108351    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 10:28:41.504714    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000-m02 minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=false
	I0819 10:28:41.585348    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-431000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 10:28:41.693283    4789 start.go:319] duration metric: took 27.951997328s to joinCluster
	I0819 10:28:41.693326    4789 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:41.693537    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:41.715528    4789 out.go:177] * Verifying Kubernetes components...
	I0819 10:28:41.790354    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:41.995139    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:42.017369    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:28:42.017608    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:28:42.017650    4789 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:28:42.017827    4789 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:42.017919    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.017925    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.017930    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.017935    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.025432    4789 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 10:28:42.518902    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.518917    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.518923    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.518927    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.521742    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:43.018396    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.018411    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.018417    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.018421    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.021454    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:43.518031    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.518083    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.518106    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.518116    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.522999    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:44.018193    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.018219    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.018231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.018237    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.021854    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:44.022387    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
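
Each GET/response pair above is one iteration of a roughly 500ms poll on the node object, repeated until the NodeReady condition turns True or the 6m budget runs out. With client-go the same loop looks approximately like this (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, for up to 6 minutes, matching the log's cadence.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-431000-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API hiccups as "not yet"; keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }
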
	I0819 10:28:44.518152    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.518189    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.518196    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.518199    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.520027    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.019772    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.019792    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.019799    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.019803    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.021628    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.518039    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.518053    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.518059    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.518064    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.520113    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:46.018198    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.018232    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.018239    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.018243    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.020136    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.518474    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.518490    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.518496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.518499    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.520505    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.520916    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:47.019124    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.019150    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.019162    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.022729    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:47.518316    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.518341    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.518351    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.518356    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.520471    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:48.019594    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.019620    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.019630    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.019636    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.023447    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:48.518492    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.518526    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.518583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.518593    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.523421    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:48.523787    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:49.019217    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.019242    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.019254    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.019260    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.022862    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:49.520299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.520324    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.520337    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.520342    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.523532    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.019383    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.019412    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.019424    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.019430    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.022847    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.519489    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.519503    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.519511    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.519515    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.522131    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:51.019130    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.019153    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.019163    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.022497    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:51.022894    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:51.518391    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.518448    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.518465    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.518476    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.521848    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.019014    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.019045    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.019103    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.019117    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.022339    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.519630    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.519644    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.519651    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.519655    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.522019    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.018435    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.018460    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.018472    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.018480    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.021850    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:53.518299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.518340    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.518349    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.518355    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.520795    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.521268    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:54.020380    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.020406    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.020418    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.020423    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.024178    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:54.519346    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.519364    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.519383    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.519387    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.521155    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:55.020400    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.020425    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.020437    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.020444    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.024326    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:55.519229    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.519245    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.519264    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.519268    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.521435    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:55.521852    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:56.019678    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.019703    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.019714    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.019719    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.023317    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:56.518539    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.518563    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.518576    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.518581    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.521781    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.020424    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.020449    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.020460    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.020465    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.024114    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.519399    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.519428    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.519468    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.519475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.522788    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.523223    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:58.018734    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.018759    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.018770    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.018777    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.022242    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.518348    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.518359    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.518371    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.518375    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.522907    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.523168    4789 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:28:58.523182    4789 node_ready.go:38] duration metric: took 16.504973252s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:58.523189    4789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:28:58.523237    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:28:58.523243    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.523249    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.523253    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.528083    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
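
With the node Ready, the same pattern is applied per system-critical pod: list the kube-system pods, then inspect each pod's PodReady condition. A hedged client-go sketch of that check (kubeconfig path assumed; the label is one of those named in the pod_ready.go line above):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // List the coredns pods, then read each pod's PodReady condition.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady {
                    fmt.Printf("%s Ready=%s\n", p.Name, c.Status)
                }
            }
        }
    }
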
	I0819 10:28:58.532699    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.532761    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:28:58.532768    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.532774    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.532776    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.535978    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.536344    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.536351    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.536358    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.536361    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.538061    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.538368    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.538377    4789 pod_ready.go:82] duration metric: took 5.660556ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538383    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538413    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:28:58.538417    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.538423    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.538428    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.540013    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.540457    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.540465    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.540471    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.540475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.542120    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.542393    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.542400    4789 pod_ready.go:82] duration metric: took 4.011453ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542406    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542439    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:28:58.542444    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.542449    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.542454    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.543986    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.544340    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.544347    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.544353    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.544356    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.545868    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.546173    4789 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.546181    4789 pod_ready.go:82] duration metric: took 3.769725ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546187    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546221    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:28:58.546226    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.546231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.546234    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.547638    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.548110    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.548118    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.548123    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.548127    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.549514    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.549853    4789 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.549860    4789 pod_ready.go:82] duration metric: took 3.668598ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.549868    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.718822    4789 request.go:632] Waited for 168.888912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718861    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718867    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.718872    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.718876    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.721032    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
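
The "Waited ... due to client-side throttling" messages come from client-go's token-bucket rate limiter: the kapi.go config dump earlier shows QPS:0, Burst:0, meaning the defaults of 5 requests/sec with a burst of 10 apply, and the flurry of per-pod GETs here exceeds them. A sketch of where those knobs live (illustrative values, not what minikube sets):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default 5: sustained requests per second
        cfg.Burst = 100 // default 10: extra headroom for short bursts
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
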
	I0819 10:28:58.919673    4789 request.go:632] Waited for 198.011193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919731    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919740    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.919750    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.919807    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.923236    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.923670    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.923682    4789 pod_ready.go:82] duration metric: took 373.799986ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.923691    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.119399    4789 request.go:632] Waited for 195.629207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119559    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119572    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.119583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.119589    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.122804    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.318619    4789 request.go:632] Waited for 195.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318674    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318695    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.318702    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.318705    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.320812    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.321165    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.321173    4789 pod_ready.go:82] duration metric: took 397.466691ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.321180    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.520541    4789 request.go:632] Waited for 199.292765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520642    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520652    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.520663    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.520672    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.524463    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.718728    4789 request.go:632] Waited for 192.615056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718803    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718811    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.718818    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.718823    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.720955    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.721397    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.721407    4789 pod_ready.go:82] duration metric: took 400.213219ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.721415    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.918907    4789 request.go:632] Waited for 197.434904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919004    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919014    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.919024    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.919030    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.922451    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.119192    4789 request.go:632] Waited for 196.220574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119263    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119272    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.119286    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.119297    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.122630    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.122957    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.122968    4789 pod_ready.go:82] duration metric: took 401.538458ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.122977    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.320524    4789 request.go:632] Waited for 197.475989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320660    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320672    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.320681    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.320689    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.323985    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.519403    4789 request.go:632] Waited for 194.628597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519535    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519546    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.519560    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.519568    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.523121    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.523435    4789 pod_ready.go:93] pod "kube-proxy-5h7j2" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.523449    4789 pod_ready.go:82] duration metric: took 400.456993ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.523457    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.718666    4789 request.go:632] Waited for 195.15054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718742    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718752    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.718786    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.718800    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.721920    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.918782    4789 request.go:632] Waited for 196.40919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918873    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918882    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.918896    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.918906    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.922355    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.922815    4789 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.922824    4789 pod_ready.go:82] duration metric: took 399.351509ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.922830    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.118854    4789 request.go:632] Waited for 195.977175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118950    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118965    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.118981    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.118987    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.122683    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.318886    4789 request.go:632] Waited for 195.887859ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319029    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319042    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.319053    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.319063    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.322689    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.323187    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.323200    4789 pod_ready.go:82] duration metric: took 400.355182ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.323208    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.518928    4789 request.go:632] Waited for 195.662505ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519043    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519057    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.519070    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.519077    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.522736    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.718819    4789 request.go:632] Waited for 195.65197ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718885    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718891    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.718899    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.718905    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.721246    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:29:01.721682    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.721691    4789 pod_ready.go:82] duration metric: took 398.467113ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.721701    4789 pod_ready.go:39] duration metric: took 3.198431164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
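	
The polling above alternates a pod GET with a node GET for every system component, and each request waits ~195ms because client-go's default client-side rate limiter (the QPS/Burst fields on rest.Config, 5 and 10 by default) spaces bursts of requests out — hence the repeated "Waited ... due to client-side throttling" lines. Below is a minimal sketch of the same Ready-condition check with client-go; it reuses the kubeconfig path logged by this run, and podIsReady is an illustrative helper, not minikube's code:

	package main

	// Minimal sketch of the pod_ready.go check above: fetch a pod and test
	// its PodReady condition. podIsReady is a hypothetical helper name.
	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19478-1622/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ready, err := podIsReady(context.Background(), cs, "kube-system", "kube-scheduler-ha-431000-m02")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("Ready:", ready)
	}
	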
	I0819 10:29:01.721718    4789 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:29:01.721774    4789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:29:01.735634    4789 api_server.go:72] duration metric: took 20.041851081s to wait for apiserver process to appear ...
	I0819 10:29:01.735647    4789 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:29:01.735663    4789 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:29:01.738815    4789 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:29:01.738848    4789 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:29:01.738854    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.738860    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.738864    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.739526    4789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:29:01.739580    4789 api_server.go:141] control plane version: v1.31.0
	I0819 10:29:01.739589    4789 api_server.go:131] duration metric: took 3.937962ms to wait for apiserver health ...
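	
After the pods report Ready, api_server.go bypasses the typed client and issues a plain GET against /healthz, treating status 200 with body "ok" as healthy, then reads /version to log the control-plane version. A rough equivalent with net/http follows; skipping TLS verification is an illustration shortcut — the real probe trusts the cluster CA from the profile's certs instead:

	package main

	// Sketch: the /healthz probe api_server.go performs above, done with
	// plain net/http. InsecureSkipVerify is used only to keep the example
	// short; the real check authenticates against the cluster's CA.
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.169.0.5:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy run above: "200 ok"
	}
	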
	I0819 10:29:01.739594    4789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:29:01.918638    4789 request.go:632] Waited for 178.995687ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918733    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918745    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.918757    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.918762    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.922864    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:01.926606    4789 system_pods.go:59] 17 kube-system pods found
	I0819 10:29:01.926628    4789 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:01.926633    4789 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:01.926636    4789 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:01.926640    4789 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:01.926642    4789 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:01.926645    4789 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:01.926647    4789 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:01.926650    4789 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:01.926653    4789 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:01.926656    4789 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:01.926659    4789 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:01.926666    4789 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:01.926669    4789 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:01.926672    4789 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:01.926674    4789 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:01.926677    4789 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:01.926680    4789 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:01.926684    4789 system_pods.go:74] duration metric: took 187.080965ms to wait for pod list to return data ...
	I0819 10:29:01.926689    4789 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:29:02.119406    4789 request.go:632] Waited for 192.625822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119507    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119517    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.119528    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.119535    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.123120    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.123283    4789 default_sa.go:45] found service account: "default"
	I0819 10:29:02.123293    4789 default_sa.go:55] duration metric: took 196.595366ms for default service account to be created ...
	I0819 10:29:02.123300    4789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:29:02.319795    4789 request.go:632] Waited for 196.43255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319928    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319939    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.319947    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.319954    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.324586    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:02.328058    4789 system_pods.go:86] 17 kube-system pods found
	I0819 10:29:02.328071    4789 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:02.328075    4789 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:02.328078    4789 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:02.328083    4789 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:02.328086    4789 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:02.328088    4789 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:02.328091    4789 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:02.328093    4789 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:02.328096    4789 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:02.328098    4789 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:02.328101    4789 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:02.328103    4789 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:02.328106    4789 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:02.328109    4789 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:02.328112    4789 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:02.328115    4789 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:02.328117    4789 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:02.328122    4789 system_pods.go:126] duration metric: took 204.813151ms to wait for k8s-apps to be running ...
	I0819 10:29:02.328133    4789 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:29:02.328183    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:29:02.340002    4789 system_svc.go:56] duration metric: took 11.865981ms WaitForService to wait for kubelet
	I0819 10:29:02.340017    4789 kubeadm.go:582] duration metric: took 20.646222268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:29:02.340034    4789 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:29:02.518831    4789 request.go:632] Waited for 178.726274ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518969    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518980    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.518991    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.518998    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.522659    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.523326    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523339    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523348    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523351    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523354    4789 node_conditions.go:105] duration metric: took 183.311856ms to run NodePressure ...
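	
The two capacity pairs above come from a single nodes LIST: node_conditions.go walks every node in the cluster and reads Status.Capacity for cpu and ephemeral-storage while verifying the pressure conditions. A sketch of the same read, assuming a *kubernetes.Clientset built as in the earlier Ready-check sketch:

	package inspect

	// Sketch: print the per-node capacity fields node_conditions.go logs
	// above. The clientset is assumed to be constructed elsewhere (e.g. via
	// clientcmd, as in the earlier sketch).
	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			// e.g. "ha-431000: cpu=2 ephemeral-storage=17734596Ki"
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}
	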
	I0819 10:29:02.523361    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:29:02.523378    4789 start.go:255] writing updated cluster config ...
	I0819 10:29:02.544110    4789 out.go:201] 
	I0819 10:29:02.566227    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:02.566358    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.588965    4789 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:29:02.630777    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:29:02.630803    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:29:02.630953    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:29:02.630966    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:29:02.631053    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.631767    4789 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:29:02.631849    4789 start.go:364] duration metric: took 64.609µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:29:02.631869    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:29:02.631978    4789 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0819 10:29:02.652968    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:29:02.653116    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:29:02.653158    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:29:02.663539    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0819 10:29:02.663925    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:29:02.664263    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:29:02.664277    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:29:02.664539    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:29:02.664672    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:02.664758    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:02.664867    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:29:02.664899    4789 client.go:168] LocalClient.Create starting
	I0819 10:29:02.664932    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:29:02.664992    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665005    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665051    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:29:02.665087    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665103    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665116    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:29:02.665122    4789 main.go:141] libmachine: (ha-431000-m03) Calling .PreCreateCheck
	I0819 10:29:02.665218    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.665228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:02.674109    4789 main.go:141] libmachine: Creating machine...
	I0819 10:29:02.674126    4789 main.go:141] libmachine: (ha-431000-m03) Calling .Create
	I0819 10:29:02.674302    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.674550    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.674293    4918 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:29:02.674675    4789 main.go:141] libmachine: (ha-431000-m03) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:29:02.956098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.955977    4918 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa...
	I0819 10:29:03.041212    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.041121    4918 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk...
	I0819 10:29:03.041230    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing magic tar header
	I0819 10:29:03.041239    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing SSH key tar header
	I0819 10:29:03.042098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.042003    4918 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03 ...
	I0819 10:29:03.582755    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.582783    4789 main.go:141] libmachine: (ha-431000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:29:03.582846    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:29:03.618942    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:29:03.618960    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:29:03.619021    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619049    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619085    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:29:03.619116    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:29:03.619133    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:29:03.621990    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Pid is 4921
	I0819 10:29:03.622461    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:29:03.622497    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.622585    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:03.623424    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:03.623486    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:03.623500    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:03.623537    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:03.623548    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:03.623558    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:03.623568    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:03.629643    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:29:03.638725    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:29:03.639577    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:03.639599    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:03.639609    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:03.639622    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.022361    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:29:04.022375    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:29:04.137228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:04.137262    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:04.137274    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:04.137284    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.138001    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:29:04.138016    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:29:05.623879    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 1
	I0819 10:29:05.623896    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:05.624023    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:05.624809    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:05.624873    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:05.624888    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:05.624904    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:05.624917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:05.624926    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:05.624935    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:07.626679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 2
	I0819 10:29:07.626696    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:07.626779    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:07.627539    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:07.627582    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:07.627592    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:07.627610    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:07.627619    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:07.627626    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:07.627635    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.627812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 3
	I0819 10:29:09.627828    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:09.627917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:09.628679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:09.628746    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:09.628777    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:09.628791    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:09.628799    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:09.628806    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:09.628812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.722721    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:29:09.722792    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:29:09.722802    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:29:09.745848    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:29:11.630390    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 4
	I0819 10:29:11.630407    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:11.630495    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:11.631275    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:11.631321    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:11.631331    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:11.631340    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:11.631359    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:11.631366    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:11.631387    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:13.633236    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 5
	I0819 10:29:13.633251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.633339    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.634147    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:13.634209    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0819 10:29:13.634221    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:29:13.634228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:29:13.634232    4789 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
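	
The driver discovers the new VM's address by generating a MAC, booting the guest, and then polling /var/db/dhcpd_leases until an entry with that MAC appears — the "Attempt 0..5" loop above, which succeeds once the lease count grows from 5 to 6. A minimal sketch of that lookup; the multi-line lease format ({ name=... ip_address=... hw_address=1,<mac> ... }) is assumed from the entries printed in the log:

	package main

	// Sketch: find the IP leased to a given MAC by scanning the macOS
	// /var/db/dhcpd_leases file, mirroring the "Searching for <mac>" loop
	// above. The lease-file layout is an assumption based on the log.
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{": // a new lease entry begins
				ip = ""
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// value looks like "1,f6:29:ff:43:e4:63"
				if strings.HasSuffix(line, ","+mac) {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "f6:29:ff:43:e4:63")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(ip) // 192.169.0.7 in the run above
	}
	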
	I0819 10:29:13.634299    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:13.634943    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635064    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635157    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:29:13.635165    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:29:13.635251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.635310    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.636120    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:29:13.636129    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:29:13.636133    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:29:13.636138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:13.636228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:13.636309    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636392    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636477    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:13.636587    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:13.636755    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:13.636763    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:29:14.697546    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
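	
WaitForSSH repeatedly dials the guest and runs `exit 0` until the command succeeds, which is what just completed above. A sketch of one such probe with golang.org/x/crypto/ssh; the host, user, and key path are taken from this log, and InsecureIgnoreHostKey is an illustration shortcut:

	package main

	// Sketch: the "About to run SSH command: exit 0" probe above, via
	// golang.org/x/crypto/ssh. Host-key checking is disabled only to keep
	// the example short; the real driver retries this until it succeeds.
	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
		if err != nil {
			log.Fatal(err) // guest not up yet; the driver would retry
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		if err := sess.Run("exit 0"); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}
	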
	I0819 10:29:14.697558    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:29:14.697564    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.697702    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.697798    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.697887    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.698009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.698168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.698318    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.698326    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:29:14.765778    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:29:14.765827    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:29:14.765833    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:29:14.765839    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.765977    4789 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:29:14.765988    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.766081    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.766185    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.766270    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766369    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766481    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.766635    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.766783    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.766792    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:29:14.841753    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:29:14.841769    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.841901    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.842009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842101    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842195    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.842324    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.842477    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.842489    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:29:14.911764    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
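	
The shell block above is idempotent: it only touches /etc/hosts when no line already ends in the hostname, rewriting an existing 127.0.1.1 entry if one is present and appending a new one otherwise. A sketch of composing the two provisioning commands as the provisioner appears to; buildHostnameCmds is a hypothetical helper, and the shell text mirrors the log:

	package main

	// Sketch: compose the hostname-provisioning commands run over SSH above.
	// buildHostnameCmds is a hypothetical name; the shell text is from the log.
	import "fmt"

	func buildHostnameCmds(h string) []string {
		hosts := fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, h, h, h)
		return []string{
			fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", h, h),
			hosts,
		}
	}

	func main() {
		for _, cmd := range buildHostnameCmds("ha-431000-m03") {
			fmt.Println(cmd)
		}
	}
	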
	I0819 10:29:14.911779    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:29:14.911793    4789 buildroot.go:174] setting up certificates
	I0819 10:29:14.911800    4789 provision.go:84] configureAuth start
	I0819 10:29:14.911807    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.911942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:14.912037    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.912110    4789 provision.go:143] copyHostCerts
	I0819 10:29:14.912141    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912187    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:29:14.912193    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912326    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:29:14.912504    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912534    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:29:14.912539    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912651    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:29:14.912808    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912854    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:29:14.912859    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912935    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:29:14.913083    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
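	
configureAuth generates a fresh server key pair for the machine and signs a certificate whose SANs are exactly the list logged above (127.0.0.1, the node IP, the machine name, localhost, minikube), so dockerd's TLS endpoint is valid for every name the client might dial. A minimal sketch with crypto/x509, assuming the CA certificate and key (caCert, caKey) have already been loaded and parsed elsewhere:

	package provision

	// Sketch: issue a server certificate with the SANs logged above, signed
	// by an existing CA. Loading and parsing the CA material is assumed and
	// not shown; the org and validity period here are illustrative.
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-431000-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return err
		}
		return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	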
	I0819 10:29:15.064390    4789 provision.go:177] copyRemoteCerts
	I0819 10:29:15.064440    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:29:15.064455    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.064599    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.064695    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.064786    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.064886    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:15.103656    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:29:15.103727    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:29:15.123430    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:29:15.123497    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:29:15.143265    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:29:15.143333    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:29:15.162885    4789 provision.go:87] duration metric: took 251.064942ms to configureAuth
	I0819 10:29:15.162900    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:29:15.163052    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:15.163065    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:15.163221    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.163329    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.163417    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163506    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.163693    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.163824    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.163831    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:29:15.225270    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:29:15.225286    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:29:15.225356    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:29:15.225368    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.225510    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.225619    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225708    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225810    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.225948    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.226090    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.226134    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:29:15.299640    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:29:15.299658    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.299797    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.299889    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.299978    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.300067    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.300202    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.300355    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.300368    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:29:16.819930    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
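The diff-or-replace one-liner above is what makes the unit update idempotent: diff -u exits non-zero both when the two files differ and when docker.service does not exist yet (the "can't stat" case here), and only then is the freshly written .new file moved into place, the daemon reloaded, and docker enabled and restarted; when the files already match, nothing is touched. The same idiom in isolation, with hypothetical file names:

	new=/tmp/docker.service.new            # unit content written out beforehand
	cur=/lib/systemd/system/docker.service
	sudo diff -u "$cur" "$new" || {
	    sudo mv "$new" "$cur"
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable docker && sudo systemctl -f restart docker
	}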
	
	I0819 10:29:16.819945    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:29:16.819953    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetURL
	I0819 10:29:16.820095    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:29:16.820107    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:29:16.820113    4789 client.go:171] duration metric: took 14.154897138s to LocalClient.Create
	I0819 10:29:16.820124    4789 start.go:167] duration metric: took 14.154947877s to libmachine.API.Create "ha-431000"
	I0819 10:29:16.820129    4789 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:29:16.820136    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:29:16.820145    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.820288    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:29:16.820301    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.820396    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.820494    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.820582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.820664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:16.862693    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:29:16.866416    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:29:16.866431    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:29:16.866540    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:29:16.866725    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:29:16.866732    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:29:16.866944    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:29:16.874578    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:29:16.904910    4789 start.go:296] duration metric: took 84.771069ms for postStartSetup
	I0819 10:29:16.904942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:16.905569    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.905740    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:16.906122    4789 start.go:128] duration metric: took 14.273822612s to createHost
	I0819 10:29:16.906138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.906230    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.906303    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906387    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906475    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.906573    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:16.906690    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:16.906697    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:29:16.969389    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088556.958185685
	
	I0819 10:29:16.969401    4789 fix.go:216] guest clock: 1724088556.958185685
	I0819 10:29:16.969406    4789 fix.go:229] Guest: 2024-08-19 10:29:16.958185685 -0700 PDT Remote: 2024-08-19 10:29:16.906131 -0700 PDT m=+127.499217490 (delta=52.054685ms)
	I0819 10:29:16.969416    4789 fix.go:200] guest clock delta is within tolerance: 52.054685ms
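The date +%s.%N round trip above is the guest-clock check: the timestamp the VM reports is compared against the host's wall clock, and provisioning continues only while the delta stays inside tolerance (52ms here). A seconds-precision sketch of the same comparison from the host (macOS date(1) has no %N, hence no fractional part):

	guest=$(ssh -i "$KEY" docker@192.169.0.7 'date +%s')   # KEY as in the scp sketch above
	host=$(date +%s)
	echo "guest/host clock delta: $((host - guest))s"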
	I0819 10:29:16.969419    4789 start.go:83] releasing machines lock for "ha-431000-m03", held for 14.337247496s
	I0819 10:29:16.969437    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.969573    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.992258    4789 out.go:177] * Found network options:
	I0819 10:29:17.014265    4789 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:29:17.037508    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.037542    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.037561    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038432    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038682    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038835    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:29:17.038873    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:29:17.038922    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.038957    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.039067    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:29:17.039087    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:17.039116    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039298    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039332    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039497    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039590    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039679    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039721    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:17.039809    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:29:17.074320    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:29:17.074385    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:29:17.120302    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:29:17.120318    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.120398    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.135851    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:29:17.144402    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:29:17.152735    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.152784    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:29:17.161185    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.169599    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:29:17.177908    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.186319    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:29:17.194967    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:29:17.203702    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:29:17.212228    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:29:17.220632    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:29:17.228164    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:29:17.235717    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.329551    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
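Taken together, the sed runs above rewrite /etc/containerd/config.toml to match the runtime choices made for this node: pin sandbox_image to registry.k8s.io/pause:3.10, force SystemdCgroup = false (the cgroupfs driver), migrate io.containerd.runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports, after which ip forwarding is switched on and containerd is restarted. A one-line spot check of the result on the guest:

	grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml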
	I0819 10:29:17.348829    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.348909    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:29:17.363903    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.374976    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:29:17.393061    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.404238    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.414728    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:29:17.438632    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.449143    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.464536    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:29:17.467445    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:29:17.474809    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:29:17.488421    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:29:17.581504    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:29:17.684960    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.684980    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:29:17.699658    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.803979    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:30:18.773891    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.968555005s)
	I0819 10:30:18.774012    4789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:30:18.808676    4789 out.go:201] 
	W0819 10:30:18.829152    4789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:29:15 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570013158Z" level=info msg="Starting up"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570447745Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.572542412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.584880924Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603137975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603181724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603219390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603233227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603303033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603338653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603471354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603509282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603521199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603528665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603591360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603811486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605351283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605389063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605504861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605538594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605610859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605677674Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607907354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607976584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607991948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608010711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608023403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608093276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608724366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608874333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608913351Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608929178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608943960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608968346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609006571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609021660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609032833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609044499Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609055485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609066063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609088279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609103865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609115537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609130257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609139734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609151164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609161605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609173829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609200246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609211000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609224200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609237871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609251525Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609296616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609316285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609327369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609362155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609478815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609512436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609530768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609541857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609553085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609563545Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610591556Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610680787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610769049Z" level=info msg="containerd successfully booted in 0.026402s"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.601341697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.606766805Z" level=info msg="Loading containers: start."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.688780306Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.769433920Z" level=info msg="Loading containers: done."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776749571Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776865122Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.804822251Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.805010917Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:29:16 ha-431000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.814047535Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:29:17 ha-431000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815466623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815881336Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815956644Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.816022765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:18 ha-431000-m03 dockerd[921]: time="2024-08-19T17:29:18.853267859Z" level=info msg="Starting up"
	Aug 19 17:30:18 ha-431000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
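The journalctl excerpt localizes the failure: the first dockerd (pid 514) notes "containerd not running, starting managed containerd" and comes up fine, but after the system containerd is stopped (the systemctl stop -f containerd at 10:29:17 above) and docker is restarted under the new unit, the second dockerd (pid 921) spends the full 60s trying to dial /run/containerd/containerd.sock and exits. That is consistent with the restarted daemon expecting the system containerd socket that had just been stopped. Checks one could run on the guest to confirm that reading, all standard systemd/coreutils:

	systemctl is-active containerd          # expected: inactive, after the stop -f above
	ls -l /run/containerd/containerd.sock   # expected: No such file or directory
	sudo journalctl --no-pager -u docker | tail -n 5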
	W0819 10:30:18.829235    4789 out.go:270] * 
	W0819 10:30:18.830413    4789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:30:18.888275    4789 out.go:201] 
	
	
	==> Docker <==
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621465217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621560978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6d38fc70c811c9647892071fd07ef2e6455806b20e204cd6583df80c81ba64b7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 19 17:30:23 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040175789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040258993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040272849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040810082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:43:15 ha-431000 dockerd[1269]: time="2024-08-19T17:43:15.200751636Z" level=info msg="ignoring event" container=ed733554ed160b888c1f7459530b3d389ee69bed96d213508d208a4f2926cfc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.201258079Z" level=info msg="shim disconnected" id=ed733554ed160b888c1f7459530b3d389ee69bed96d213508d208a4f2926cfc3 namespace=moby
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.201498173Z" level=warning msg="cleaning up after shim disconnected" id=ed733554ed160b888c1f7459530b3d389ee69bed96d213508d208a4f2926cfc3 namespace=moby
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.201540415Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.540578680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.540705518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.540715759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.540887282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.004579691Z" level=info msg="shim disconnected" id=e7cacf032435fe5fd74c9ff947e51071e84739d9cdfb1d3f0b1c3f7f72df50f6 namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1269]: time="2024-08-19T17:43:16.004599876Z" level=info msg="ignoring event" container=e7cacf032435fe5fd74c9ff947e51071e84739d9cdfb1d3f0b1c3f7f72df50f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.004799413Z" level=warning msg="cleaning up after shim disconnected" id=e7cacf032435fe5fd74c9ff947e51071e84739d9cdfb1d3f0b1c3f7f72df50f6 namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.004913234Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.023070076Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:43:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.540369658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.546150369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.546220724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.546357823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e3a7fa32f1ca2       6e38f40d628db                                                                                         30 seconds ago      Running             storage-provisioner       1                   868ee98671e83       storage-provisioner
	73731822fbc4d       38af8ddebf499                                                                                         31 seconds ago      Running             kube-vip                  1                   90ec229d87c2c       kube-vip-ha-431000
	da6e4a61b6cf8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Running             busybox                   0                   6d38fc70c811c       busybox-7dff88458-x7m6m
	b9d1bccf00c94       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	e7cacf032435f       6e38f40d628db                                                                                         15 minutes ago      Exited              storage-provisioner       0                   868ee98671e83       storage-provisioner
	a3891ab602da5       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              15 minutes ago      Running             kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                         15 minutes ago      Running             kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	ed733554ed160       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     16 minutes ago      Exited              kube-vip                  0                   90ec229d87c2c       kube-vip-ha-431000
	11d9cd3b2f49f       1766f54c897f0                                                                                         16 minutes ago      Running             kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	262471364c991       604f5db92eaa8                                                                                         16 minutes ago      Running             kube-apiserver            0                   5a0fe916eaf1d       kube-apiserver-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                         16 minutes ago      Running             etcd                      0                   fc30d54d1b565       etcd-ha-431000
	2801f8f44773b       045733566833c                                                                                         16 minutes ago      Running             kube-controller-manager   0                   80d21805f230b       kube-controller-manager-ha-431000
	
	
	==> coredns [a3891ab602da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40841 - 35632 "HINFO IN 8043641794425982319.4992720317295253252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008506209s
	[INFO] 10.244.1.2:51889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132717s
	[INFO] 10.244.1.2:37985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001601417s
	[INFO] 10.244.1.2:55682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.007910651s
	[INFO] 10.244.0.4:38616 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.000569215s
	[INFO] 10.244.0.4:47772 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000054313s
	[INFO] 10.244.1.2:49768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135774s
	[INFO] 10.244.1.2:55729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00095124s
	[INFO] 10.244.1.2:38602 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089444s
	[INFO] 10.244.1.2:52875 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099022s
	[INFO] 10.244.1.2:49308 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063848s
	[INFO] 10.244.0.4:57863 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000064923s
	[INFO] 10.244.0.4:40409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096347s
	[INFO] 10.244.1.2:34617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084305s
	[INFO] 10.244.1.2:55843 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058734s
	[INFO] 10.244.0.4:43213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096675s
	[INFO] 10.244.0.4:44050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031036s
	[INFO] 10.244.1.2:49077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105574s
	[INFO] 10.244.1.2:57560 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084227s
	[INFO] 10.244.1.2:40959 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135434s
	
	
	==> coredns [b9d1bccf00c9] <==
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54195 - 29045 "HINFO IN 6513715404119561949.1799819676960271336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007921235s
	[INFO] 10.244.1.2:45210 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.055498798s
	[INFO] 10.244.0.4:53730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111076s
	[INFO] 10.244.0.4:51704 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000411643s
	[INFO] 10.244.1.2:54559 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088744s
	[INFO] 10.244.1.2:58642 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064137s
	[INFO] 10.244.1.2:34281 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000845538s
	[INFO] 10.244.0.4:53439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000058375s
	[INFO] 10.244.0.4:33951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106207s
	[INFO] 10.244.0.4:38202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000034691s
	[INFO] 10.244.0.4:46478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119286s
	[INFO] 10.244.0.4:53704 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053613s
	[INFO] 10.244.0.4:42766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051163s
	[INFO] 10.244.1.2:44413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116167s
	[INFO] 10.244.1.2:58453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067066s
	[INFO] 10.244.0.4:37472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063597s
	[INFO] 10.244.0.4:59559 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033396s
	[INFO] 10.244.1.2:59906 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000120736s
	[INFO] 10.244.0.4:47175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120659s
	[INFO] 10.244.0.4:56722 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121072s
	[INFO] 10.244.0.4:43652 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174608s
	[INFO] 10.244.0.4:32818 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00017028s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: etcdserver: request timed out
	
	
	==> dmesg <==
	[  +2.712596] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230971] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.519395] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.106046] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.754357] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.260100] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.108326] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.116397] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.050322] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.370658] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.100232] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.114416] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.133019] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.706453] systemd-fstab-generator[1261]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.542020] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +4.524199] systemd-fstab-generator[1651]: Ignoring "noauto" option for root device
	[  +0.058523] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.145787] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
	[  +0.090131] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.001426] kauditd_printk_skb: 35 callbacks suppressed
	[Aug19 17:28] kauditd_printk_skb: 15 callbacks suppressed
	[ +36.695422] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [39fe08877284] <==
	{"level":"info","ts":"2024-08-19T17:43:58.986347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T17:43:58.986614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T17:43:58.986783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-19T17:43:58.986883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 3080] sent MsgPreVote request to c22c1f54a3cc7858 at term 2"}
	{"level":"warn","ts":"2024-08-19T17:43:59.325290Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502277735765,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:43:59.628177Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c22c1f54a3cc7858","rtt":"6.65857ms","error":"dial tcp 192.169.0.6:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-19T17:43:59.646598Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c22c1f54a3cc7858","rtt":"846.892µs","error":"dial tcp 192.169.0.6:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-19T17:43:59.826119Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502277735765,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-19T17:44:00.286894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:00.286979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:00.286999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:00.287020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 3080] sent MsgPreVote request to c22c1f54a3cc7858 at term 2"}
	{"level":"warn","ts":"2024-08-19T17:44:00.316959Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-08-19T17:44:00.317430Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.370238988s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-19T17:44:00.317867Z","caller":"traceutil/trace.go:171","msg":"trace[1756664454] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"7.370687205s","start":"2024-08-19T17:43:52.947168Z","end":"2024-08-19T17:44:00.317855Z","steps":["trace[1756664454] 'agreement among raft nodes before linearized reading'  (duration: 7.370238643s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:44:00.318052Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:43:52.947048Z","time spent":"7.370989753s","remote":"127.0.0.1:43476","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true "}
	{"level":"warn","ts":"2024-08-19T17:44:00.317456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"11.683881647s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-19T17:44:00.318507Z","caller":"traceutil/trace.go:171","msg":"trace[2056526421] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; }","duration":"11.684899821s","start":"2024-08-19T17:43:48.633556Z","end":"2024-08-19T17:44:00.318456Z","steps":["trace[2056526421] 'agreement among raft nodes before linearized reading'  (duration: 11.683879973s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:44:00.318684Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:43:48.633546Z","time spent":"11.685127682s","remote":"127.0.0.1:43510","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-08-19T17:44:00.317484Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.081283616s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-19T17:44:00.318742Z","caller":"traceutil/trace.go:171","msg":"trace[2076737051] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; }","duration":"8.082543189s","start":"2024-08-19T17:43:52.236194Z","end":"2024-08-19T17:44:00.318737Z","steps":["trace[2076737051] 'agreement among raft nodes before linearized reading'  (duration: 8.081284295s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:44:00.318761Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:43:52.236159Z","time spent":"8.082595448s","remote":"127.0.0.1:43400","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":0,"response size":0,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true "}
	{"level":"warn","ts":"2024-08-19T17:44:00.317643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.093796038s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-19T17:44:00.319293Z","caller":"traceutil/trace.go:171","msg":"trace[151529778] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"13.095454113s","start":"2024-08-19T17:43:47.223834Z","end":"2024-08-19T17:44:00.319288Z","steps":["trace[151529778] 'agreement among raft nodes before linearized reading'  (duration: 13.093794548s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:44:00.319589Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:43:47.223811Z","time spent":"13.095768627s","remote":"127.0.0.1:43324","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
	
	
	==> kernel <==
	 17:44:00 up 16 min,  0 users,  load average: 1.84, 0.66, 0.28
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:43:13.917134       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:43:23.922359       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:43:23.922412       1 main.go:299] handling current node
	I0819 17:43:23.922423       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:43:23.922430       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:43:23.922666       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:43:23.922692       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:43:33.916153       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:43:33.916225       1 main.go:299] handling current node
	I0819 17:43:33.916247       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:43:33.916308       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:43:33.916475       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:43:33.916731       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:43:43.913330       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:43:43.913437       1 main.go:299] handling current node
	I0819 17:43:43.913467       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:43:43.913487       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:43:43.913739       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:43:43.913852       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:43:53.913054       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:43:53.913459       1 main.go:299] handling current node
	I0819 17:43:53.913546       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:43:53.913618       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:43:53.913961       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:43:53.914051       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [262471364c99] <==
	W0819 17:43:53.341497       1 reflector.go:561] storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions: failed to list *apiextensions.CustomResourceDefinition: etcdserver: request timed out
	E0819 17:43:53.341607       1 cacher.go:478] cacher (customresourcedefinitions.apiextensions.k8s.io): unexpected ListAndWatch error: failed to list *apiextensions.CustomResourceDefinition: etcdserver: request timed out; reinitializing...
	W0819 17:43:53.341717       1 reflector.go:561] storage/cacher.go:/leases: failed to list *coordination.Lease: etcdserver: request timed out
	E0819 17:43:53.341765       1 cacher.go:478] cacher (leases.coordination.k8s.io): unexpected ListAndWatch error: failed to list *coordination.Lease: etcdserver: request timed out; reinitializing...
	W0819 17:43:53.341726       1 reflector.go:561] storage/cacher.go:/validatingadmissionpolicies: failed to list *admissionregistration.ValidatingAdmissionPolicy: etcdserver: request timed out
	E0819 17:43:53.341842       1 cacher.go:478] cacher (validatingadmissionpolicies.admissionregistration.k8s.io): unexpected ListAndWatch error: failed to list *admissionregistration.ValidatingAdmissionPolicy: etcdserver: request timed out; reinitializing...
	W0819 17:43:53.341855       1 reflector.go:561] storage/cacher.go:/replicasets: failed to list *apps.ReplicaSet: etcdserver: request timed out
	E0819 17:43:53.342020       1 controller.go:163] "Unhandled Error" err="unable to sync kubernetes service: etcdserver: request timed out" logger="UnhandledError"
	E0819 17:43:53.342025       1 cacher.go:478] cacher (replicasets.apps): unexpected ListAndWatch error: failed to list *apps.ReplicaSet: etcdserver: request timed out; reinitializing...
	W0819 17:43:53.341871       1 reflector.go:561] storage/cacher.go:/horizontalpodautoscalers: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out
	E0819 17:43:53.342351       1 cacher.go:478] cacher (horizontalpodautoscalers.autoscaling): unexpected ListAndWatch error: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out; reinitializing...
	W0819 17:43:53.342131       1 reflector.go:561] storage/cacher.go:/controllers: failed to list *core.ReplicationController: etcdserver: request timed out
	E0819 17:43:53.342440       1 cacher.go:478] cacher (replicationcontrollers): unexpected ListAndWatch error: failed to list *core.ReplicationController: etcdserver: request timed out; reinitializing...
	W0819 17:43:53.342216       1 reflector.go:561] storage/cacher.go:/certificatesigningrequests: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out
	E0819 17:43:53.342563       1 cacher.go:478] cacher (certificatesigningrequests.certificates.k8s.io): unexpected ListAndWatch error: failed to list *certificates.CertificateSigningRequest: etcdserver: request timed out; reinitializing...
	W0819 17:43:53.341741       1 reflector.go:561] storage/cacher.go:/limitranges: failed to list *core.LimitRange: etcdserver: request timed out
	E0819 17:43:53.342646       1 cacher.go:478] cacher (limitranges): unexpected ListAndWatch error: failed to list *core.LimitRange: etcdserver: request timed out; reinitializing...
	W0819 17:43:53.342235       1 reflector.go:561] storage/cacher.go:/persistentvolumes: failed to list *core.PersistentVolume: etcdserver: request timed out
	E0819 17:43:53.342785       1 cacher.go:478] cacher (persistentvolumes): unexpected ListAndWatch error: failed to list *core.PersistentVolume: etcdserver: request timed out; reinitializing...
	E0819 17:43:57.631501       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E0819 17:43:57.633635       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0819 17:43:57.635027       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0819 17:43:57.636411       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0819 17:43:57.637703       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.307861ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	E0819 17:44:00.320300       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	
	
	==> kube-controller-manager [2801f8f44773] <==
	I0819 17:42:30.304809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:30.365012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.195µs"
	I0819 17:42:32.043252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:33.778806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:33.779606       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-431000-m04"
	I0819 17:42:33.857848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:39.645314       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:52.547283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:52.548660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-431000-m04"
	I0819 17:42:52.555756       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:52.559687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.641µs"
	I0819 17:42:52.568999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.897µs"
	I0819 17:42:52.574921       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.923µs"
	I0819 17:42:53.790919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:42:54.429233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.87659ms"
	I0819 17:42:54.429711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.036µs"
	I0819 17:43:00.100260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:43:42.533210       1 request.go:700] Waited for 1.03993092s, retries: 2, retry-after: 1s - retry-reason: due to server-side throttling, FlowSchema UID: "6fb80af4-8bca-46e7-ad3a-5028f0da03c7" - request: GET:https://192.169.0.5:8443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=2649&timeout=6m25s&timeoutSeconds=385&watch=true
	E0819 17:43:45.807946       1 node_lifecycle_controller.go:978] "Error updating node" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" logger="node-lifecycle-controller" node="ha-431000-m02"
	E0819 17:43:45.807963       1 node_lifecycle_controller.go:978] "Error updating node" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" logger="node-lifecycle-controller" node="ha-431000"
	I0819 17:43:52.529857       1 request.go:700] Waited for 1.29883726s, retries: 6, retry-after: 1s - retry-reason: due to server-side throttling, FlowSchema UID: "6fb80af4-8bca-46e7-ad3a-5028f0da03c7" - request: GET:https://192.169.0.5:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=2632&timeout=6m43s&timeoutSeconds=403&watch=true
	E0819 17:43:53.330386       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-431000"
	E0819 17:43:53.331950       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="etcdserver: request timed out" logger="node-lifecycle-controller" node=""
	E0819 17:43:53.330952       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-431000-m02"
	E0819 17:43:53.332073       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="etcdserver: request timed out" logger="node-lifecycle-controller" node=""
	
	
	==> kube-proxy [889ab608901b] <==
	E0819 17:27:50.162614       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:27:50.171417       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0819 17:27:50.171450       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:27:50.239161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:27:50.239202       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:27:50.239220       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:27:50.242102       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:27:50.242306       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:27:50.242335       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:27:50.253458       1 config.go:197] "Starting service config controller"
	I0819 17:27:50.253497       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:27:50.253518       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:27:50.253542       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:27:50.253889       1 config.go:326] "Starting node config controller"
	I0819 17:27:50.253915       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:27:50.354735       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:27:50.354788       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:27:50.354817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	W0819 17:44:00.032646       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0819 17:44:00.032656       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0819 17:44:00.032892       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	W0819 17:27:42.867998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:27:42.868077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.900445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:27:42.900541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.970545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:27:42.970765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:43.004003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:43.004103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:27:43.339820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:30:22.272037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:30:22.273195       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e37fe27d-f1bf-427d-a76d-96722b0c74a1(default/busybox-7dff88458-x7m6m) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x7m6m"
	E0819 17:30:22.273433       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" pod="default/busybox-7dff88458-x7m6m"
	I0819 17:30:22.273582       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:42:29.626807       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kcrzx\": pod kindnet-kcrzx is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kcrzx" node="ha-431000-m04"
	E0819 17:42:29.626857       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4d8e74ea-456c-476b-951f-c880eb642788(kube-system/kindnet-kcrzx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kcrzx"
	E0819 17:42:29.626868       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kcrzx\": pod kindnet-kcrzx is already assigned to node \"ha-431000-m04\"" pod="kube-system/kindnet-kcrzx"
	I0819 17:42:29.626879       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kcrzx" node="ha-431000-m04"
	E0819 17:42:29.628487       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2fn5w\": pod kube-proxy-2fn5w is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2fn5w" node="ha-431000-m04"
	E0819 17:42:29.628792       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bca1b722-fe85-4f4b-a536-8228357812a4(kube-system/kube-proxy-2fn5w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2fn5w"
	E0819 17:42:29.628962       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2fn5w\": pod kube-proxy-2fn5w is already assigned to node \"ha-431000-m04\"" pod="kube-system/kube-proxy-2fn5w"
	I0819 17:42:29.629175       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2fn5w" node="ha-431000-m04"
	E0819 17:42:52.562727       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wfcpq\": pod busybox-7dff88458-wfcpq is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wfcpq" node="ha-431000-m04"
	E0819 17:42:52.562826       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c7d1dd4a-aba7-4c8f-be2e-0dc5cdb4faf7(default/busybox-7dff88458-wfcpq) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wfcpq"
	E0819 17:42:52.562855       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wfcpq\": pod busybox-7dff88458-wfcpq is already assigned to node \"ha-431000-m04\"" pod="default/busybox-7dff88458-wfcpq"
	I0819 17:42:52.562878       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wfcpq" node="ha-431000-m04"
	
	
	==> kubelet <==
	Aug 19 17:43:35 ha-431000 kubelet[2153]: E0819 17:43:35.931748    2153 container_log_manager.go:307] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-431000_4be26ba36a583cb5cf787c7b12260cd6/kube-apiserver/0.log\": failed to reopen container log \"262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd" path="/var/log/pods/kube-system_kube-apiserver-ha-431000_4be26ba36a583cb5cf787c7b12260cd6/kube-apiserver/0.log" currentSize=23068554 maxSize=10485760
	Aug 19 17:43:41 ha-431000 kubelet[2153]: E0819 17:43:41.714802    2153 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-431000?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 19 17:43:45 ha-431000 kubelet[2153]: E0819 17:43:45.526381    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:43:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:43:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:43:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:43:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:43:45 ha-431000 kubelet[2153]: E0819 17:43:45.938612    2153 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd"
	Aug 19 17:43:45 ha-431000 kubelet[2153]: E0819 17:43:45.938741    2153 container_log_manager.go:307] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-431000_4be26ba36a583cb5cf787c7b12260cd6/kube-apiserver/0.log\": failed to reopen container log \"262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd" path="/var/log/pods/kube-system_kube-apiserver-ha-431000_4be26ba36a583cb5cf787c7b12260cd6/kube-apiserver/0.log" currentSize=25321134 maxSize=10485760
	Aug 19 17:43:51 ha-431000 kubelet[2153]: E0819 17:43:51.716592    2153 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-431000?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 19 17:43:51 ha-431000 kubelet[2153]: I0819 17:43:51.717089    2153 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Aug 19 17:43:55 ha-431000 kubelet[2153]: E0819 17:43:55.942745    2153 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd"
	Aug 19 17:43:55 ha-431000 kubelet[2153]: E0819 17:43:55.942812    2153 container_log_manager.go:307] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-431000_4be26ba36a583cb5cf787c7b12260cd6/kube-apiserver/0.log\": failed to reopen container log \"262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd" path="/var/log/pods/kube-system_kube-apiserver-ha-431000_4be26ba36a583cb5cf787c7b12260cd6/kube-apiserver/0.log" currentSize=25349571 maxSize=10485760
	Aug 19 17:43:58 ha-431000 kubelet[2153]: W0819 17:43:58.723759    2153 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 19 17:43:58 ha-431000 kubelet[2153]: E0819 17:43:58.723842    2153 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-431000.17ed32299fbaf8bc\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-431000.17ed32299fbaf8bc  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-431000,UID:4be26ba36a583cb5cf787c7b12260cd6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-431000,},FirstTimestamp:2024-08-19 17:43:06.707646652 +0000 UTC m=+921.301345273,LastTimestamp:2024-08-19 17:43:10.714412846 +0000 UTC m=+925.308111459,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-431000,}"
	Aug 19 17:43:58 ha-431000 kubelet[2153]: W0819 17:43:58.723913    2153 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 19 17:43:58 ha-431000 kubelet[2153]: W0819 17:43:58.723932    2153 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 19 17:43:58 ha-431000 kubelet[2153]: I0819 17:43:58.723959    2153 status_manager.go:851] "Failed to get status for pod" podUID="4be26ba36a583cb5cf787c7b12260cd6" pod="kube-system/kube-apiserver-ha-431000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000\": http2: client connection lost"
	Aug 19 17:43:58 ha-431000 kubelet[2153]: E0819 17:43:58.724151    2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-431000?timeout=10s\": http2: client connection lost" interval="200ms"
	Aug 19 17:43:58 ha-431000 kubelet[2153]: W0819 17:43:58.724309    2153 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 19 17:43:58 ha-431000 kubelet[2153]: W0819 17:43:58.724367    2153 reflector.go:484] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 19 17:43:58 ha-431000 kubelet[2153]: W0819 17:43:58.724412    2153 reflector.go:484] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 19 17:43:58 ha-431000 kubelet[2153]: W0819 17:43:58.724452    2153 reflector.go:484] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 19 17:43:58 ha-431000 kubelet[2153]: W0819 17:43:58.724491    2153 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 19 17:43:58 ha-431000 kubelet[2153]: W0819 17:43:58.723769    2153 reflector.go:484] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000: exit status 2 (15.648719731s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-431000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (73.35s)
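The etcd log further up shows why this failure follows `node stop m02`: the surviving member keeps pre-voting and cannot reach its peer at 192.169.0.6:2380 ("no route to host"), so linearizable reads block until the 7s ReadIndex timeout fires and the apiserver sees "etcdserver: request timed out". A minimal sketch of probing member health directly with the etcd v3 Go client follows; the endpoints and the TLS-free config are illustrative assumptions, not what the harness runs (a kubeadm-managed etcd would also require client certificates).

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// Probe each etcd endpoint and report whether it answers Status calls.
// The endpoints are assumptions for illustration; a real control plane
// would also need the client TLS material that kubeadm generates.
func main() {
	endpoints := []string{"https://192.169.0.5:2379", "https://192.169.0.6:2379"}
	for _, ep := range endpoints {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{ep},
			DialTimeout: 2 * time.Second,
		})
		if err != nil {
			fmt.Printf("%s: dial failed: %v\n", ep, err)
			continue
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		st, err := cli.Status(ctx, ep) // times out when the member is unreachable
		cancel()
		cli.Close()
		if err != nil {
			fmt.Printf("%s: unhealthy: %v\n", ep, err)
			continue
		}
		fmt.Printf("%s: healthy, leader=%x\n", ep, st.Leader)
	}
}

Run against this cluster during the stop window, the sketch would report the .6 endpoint unhealthy while .5 still answers, matching the prober warnings in the etcd log above.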

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (49.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (14.829748936s)
ha_test.go:413: expected profile "ha-431000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-431000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-431000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-431000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
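The assertion above inspects just two fields buried in that JSON blob: the profile's Name and its Status. A minimal sketch of the decode it implies, assuming hypothetical struct names rather than minikube's real config types:

package main

import (
	"encoding/json"
	"fmt"
)

// ProfileList mirrors only the shape the assertion cares about;
// the struct names here are hypothetical, not minikube's own.
type ProfileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-431000","Status":"Stopped"}]}`)
	var pl ProfileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-431000" && p.Status != "Degraded" {
			fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
		}
	}
}

Everything else in the Config payload is carried along in the output but ignored by the check.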
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000: exit status 2 (15.882089349s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
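The `--format={{.Host}}` and `--format={{.APIServer}}` arguments the helpers pass to `minikube status` are Go text/template expressions rendered against the status object. A stand-in illustration of that mechanism, assuming a simplified Status struct rather than minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status stands in for the struct the --format template is rendered
// against; only the two fields the helpers reference are shown.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	// The same template string the harness passes as --format={{.APIServer}}.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	// With the apiserver down, this prints "Stopped", matching the log above.
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Stopped"})
}

This is why the two probes can disagree: the VM ({{.Host}}) reports "Running" while the apiserver field reports "Stopped".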
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (1.998066351s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT |                     |
	|         | busybox-7dff88458-wfcpq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-431000 -v=7                | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node stop m02 -v=7         | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:43 PDT | 19 Aug 24 10:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:27:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:27:09.441458    4789 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:27:09.441716    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441721    4789 out.go:358] Setting ErrFile to fd 2...
	I0819 10:27:09.441725    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441914    4789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:27:09.443405    4789 out.go:352] Setting JSON to false
	I0819 10:27:09.468451    4789 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3399,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:27:09.468547    4789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:27:09.554597    4789 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:27:09.577770    4789 notify.go:220] Checking for updates...
	I0819 10:27:09.609734    4789 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:27:09.676944    4789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:09.699980    4789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:27:09.722951    4789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:27:09.744804    4789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.765726    4789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:27:09.787204    4789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:27:09.817679    4789 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 10:27:09.859821    4789 start.go:297] selected driver: hyperkit
	I0819 10:27:09.859849    4789 start.go:901] validating driver "hyperkit" against <nil>
	I0819 10:27:09.859893    4789 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:27:09.864287    4789 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.864395    4789 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:27:09.872759    4789 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:27:09.876743    4789 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.876768    4789 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:27:09.876803    4789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:27:09.877011    4789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:27:09.877072    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:09.877082    4789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 10:27:09.877094    4789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:27:09.877164    4789 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:09.877251    4789 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.919755    4789 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:27:09.940604    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:09.940675    4789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:27:09.940720    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:09.940918    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:09.940931    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:09.941271    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:09.941299    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json: {Name:mkf9dcbb24d8b9fbe62d81f81a7a87fec457d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:09.941835    4789 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:09.941963    4789 start.go:364] duration metric: took 95.166µs to acquireMachinesLock for "ha-431000"
	I0819 10:27:09.941997    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:09.942082    4789 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 10:27:09.963791    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:09.964075    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.964148    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:09.974068    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51111
	I0819 10:27:09.974512    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:09.974919    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:09.974932    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:09.975172    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:09.975283    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:09.975374    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
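
The run of "Calling .GetVersion / .SetConfigRaw / .GetMachineName / .DriverName" lines above is not in-process: libmachine launches the hyperkit driver as a separate plugin process (the docker-machine-driver-hyperkit binary it validated earlier) and each "Calling" entry is an RPC round-trip to the plugin server it announced on 127.0.0.1:51111.
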
	I0819 10:27:09.975471    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:09.975492    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:09.975527    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:09.975578    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975594    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975657    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:09.975695    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975707    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975719    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:09.975729    4789 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0819 10:27:09.975800    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.975970    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:09.976388    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:09.976397    4789 main.go:141] libmachine: (ha-431000) Calling .Create
	I0819 10:27:09.976462    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.976580    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:09.976459    4799 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.976633    4789 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:10.160305    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.160220    4799 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa...
	I0819 10:27:10.258779    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.258678    4799 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0819 10:27:10.258792    4789 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0819 10:27:10.258800    4789 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0819 10:27:10.259681    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.259588    4799 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000 ...
	I0819 10:27:10.634434    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.634476    4789 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:27:10.634529    4789 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:27:10.744945    4789 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:27:10.744966    4789 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:10.744993    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745030    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745065    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:10.745094    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
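
Decoded, that command line is the VM's entire device model: -c 2 and -m 2200M size the guest, -s 0:0,hostbridge and -s 31,lpc are the PCI host bridge and LPC bus, -s 1:0,virtio-net is the vmnet NIC carrying the generated MAC, -s 2:0,virtio-blk attaches the raw disk, -s 3,ahci-cd the boot2docker ISO, -s 4,virtio-rnd a paravirtual entropy source, -l com1,autopty=... wires the serial console to a pty with a console-ring log file, and -f kexec,... boots the extracted bzimage/initrd directly with the kernel arguments that follow.
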
	I0819 10:27:10.745118    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:10.748020    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Pid is 4802
	I0819 10:27:10.748404    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:27:10.748413    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.748494    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:10.749357    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:10.749398    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:10.749412    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:10.749423    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:10.749431    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:10.755634    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:10.806699    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:10.807300    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:10.807314    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:10.807322    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:10.807335    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.184562    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:11.184575    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:11.299194    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:11.299213    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:11.299228    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:11.299236    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.300075    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:11.300086    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:12.750038    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0819 10:27:12.750054    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:12.750189    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:12.750969    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:12.751019    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:12.751030    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:12.751039    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:12.751052    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:14.752158    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0819 10:27:14.752174    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:14.752264    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:14.753040    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:14.753090    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:14.753102    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:14.753111    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:14.753117    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.754325    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0819 10:27:16.754340    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:16.754402    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:16.755326    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:16.755347    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:16.755354    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:16.755373    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:16.755390    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.856153    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:27:16.856252    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:27:16.856262    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:27:16.880804    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:27:18.757489    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0819 10:27:18.757504    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:18.757601    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:18.758394    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:18.758435    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:18.758449    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:18.758481    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:18.758495    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:20.758927    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0819 10:27:20.758946    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.759035    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.759848    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:20.759873    4789 main.go:141] libmachine: (ha-431000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:20.759888    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:20.759901    4789 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:27:20.759913    4789 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
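
The Attempt 0 through Attempt 5 loop above is a plain poll of host DHCP state: every two seconds the driver re-reads /var/db/dhcpd_leases until the MAC it generated shows up with an address. A minimal, self-contained Go sketch of that lookup (not the driver's actual code; the field names assume the brace-delimited key=value lease blocks macOS writes, which is what the "dhcp entry" lines above were parsed from):

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// lookupLeaseIP scans the macOS DHCP lease file for a MAC address and
	// returns the ip_address recorded in the same lease block. ip_address
	// precedes hw_address inside a block, so remembering the last ip seen
	// is enough to pair them.
	func lookupLeaseIP(leasePath, mac string) (string, error) {
		data, err := os.ReadFile(leasePath)
		if err != nil {
			return "", err
		}
		var ip string
		for _, line := range strings.Split(string(data), "\n") {
			line = strings.TrimSpace(line)
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=1,"):
				if strings.EqualFold(strings.TrimPrefix(line, "hw_address=1,"), mac) {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}
	
	func main() {
		// In the run above this resolves b2:ad:7c:2f:19:d9 to 192.169.0.5
		// on the sixth poll, once the guest's DHCP request has landed.
		ip, err := lookupLeaseIP("/var/db/dhcpd_leases", "b2:ad:7c:2f:19:d9")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(ip)
	}
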
	I0819 10:27:20.759952    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:20.760523    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760634    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760741    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:27:20.760753    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:20.760839    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.760885    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.761678    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:27:20.761690    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:27:20.761696    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:27:20.761702    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:20.761795    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:20.761883    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.761969    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.762060    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:20.762168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:20.762361    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:20.762369    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:27:21.818394    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
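
WaitForSSH is exactly what the two lines above suggest: the client repeatedly opens a session and runs exit 0 until one succeeds, which here happens about a second after the DHCP lease appeared.
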
	I0819 10:27:21.818406    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:27:21.818419    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.818554    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.818654    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818747    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818841    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.818981    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.819131    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.819139    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:27:21.870784    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:27:21.870826    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:27:21.870831    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:27:21.870837    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.870976    4789 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:27:21.870986    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.871077    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.871169    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.871272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871352    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871452    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.871577    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.871711    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.871719    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:27:21.937676    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:27:21.937694    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.937826    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.937927    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938017    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938112    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.938245    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.938391    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.938402    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:27:21.996654    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:27:21.996676    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:27:21.996692    4789 buildroot.go:174] setting up certificates
	I0819 10:27:21.996701    4789 provision.go:84] configureAuth start
	I0819 10:27:21.996714    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.996873    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:21.996990    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.997094    4789 provision.go:143] copyHostCerts
	I0819 10:27:21.997133    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997201    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:27:21.997209    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997337    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:27:21.997534    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997567    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:27:21.997572    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997714    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:27:21.997882    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.997926    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:27:21.997941    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.998049    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:27:21.998203    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:27:22.044837    4789 provision.go:177] copyRemoteCerts
	I0819 10:27:22.044896    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:27:22.044908    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.045021    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.045107    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.045191    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.045288    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:22.078701    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:27:22.078779    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:27:22.098027    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:27:22.098092    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:27:22.117169    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:27:22.117235    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:27:22.137411    4789 provision.go:87] duration metric: took 140.68689ms to configureAuth
	I0819 10:27:22.137424    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:27:22.137558    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:22.137574    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:22.137700    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.137783    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.137859    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.137942    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.138028    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.138134    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.138266    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.138274    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:27:22.191384    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:27:22.191397    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:27:22.191469    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:27:22.191481    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.191636    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.191724    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191834    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191924    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.192051    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.192193    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.192236    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:27:22.256138    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:27:22.256165    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.256301    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.256391    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256475    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256578    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.256695    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.256839    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.256851    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:27:23.816844    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
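
The update above is diff-gated: diff -u exits non-zero only when the freshly rendered docker.service.new differs from what is on disk, and only then does the || branch move it into place, daemon-reload, enable, and restart docker. On this first boot the stat failure ("No such file or directory") counts as a difference, so the unit is installed and the "Created symlink" line records docker being enabled for the first time.
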
	I0819 10:27:23.816860    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:27:23.816871    4789 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0819 10:27:23.817008    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:27:23.817016    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:27:23.817020    4789 client.go:171] duration metric: took 13.841219093s to LocalClient.Create
	I0819 10:27:23.817036    4789 start.go:167] duration metric: took 13.84126124s to libmachine.API.Create "ha-431000"
	I0819 10:27:23.817044    4789 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:27:23.817051    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:27:23.817063    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.817219    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:27:23.817232    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.817321    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.817402    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.817497    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.817595    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.852993    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:27:23.857771    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:27:23.857792    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:27:23.857909    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:27:23.858094    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:27:23.858100    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:27:23.858323    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:27:23.868639    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:23.894485    4789 start.go:296] duration metric: took 77.430316ms for postStartSetup
	I0819 10:27:23.894509    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:23.895099    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.895256    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:23.895585    4789 start.go:128] duration metric: took 13.953185373s to createHost
	I0819 10:27:23.895598    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.895691    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.895790    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895879    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895966    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.896069    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:23.896228    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:23.896236    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:27:23.956133    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088443.744394113
	
	I0819 10:27:23.956145    4789 fix.go:216] guest clock: 1724088443.744394113
	I0819 10:27:23.956151    4789 fix.go:229] Guest: 2024-08-19 10:27:23.744394113 -0700 PDT Remote: 2024-08-19 10:27:23.895593 -0700 PDT m=+14.491162031 (delta=-151.198887ms)
	I0819 10:27:23.956169    4789 fix.go:200] guest clock delta is within tolerance: -151.198887ms
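
The delta is simply guest minus host: 1724088443.744394113 - 1724088443.895593 = -0.151198887 s, i.e. the guest clock trails the host by about 151 ms, well inside minikube's drift tolerance, so no clock adjustment is forced before the machines lock is released.
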
	I0819 10:27:23.956173    4789 start.go:83] releasing machines lock for "ha-431000", held for 14.013893151s
	I0819 10:27:23.956192    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956322    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.956416    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956749    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956860    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956951    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:27:23.956980    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957023    4789 ssh_runner.go:195] Run: cat /version.json
	I0819 10:27:23.957036    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957073    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957109    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957170    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957184    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957292    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957350    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.957384    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:24.032926    4789 ssh_runner.go:195] Run: systemctl --version
	I0819 10:27:24.037723    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:27:24.041939    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:27:24.041985    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:27:24.055424    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:27:24.055435    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.055529    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.070257    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:27:24.079169    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:27:24.088264    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.088319    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:27:24.097172    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.105902    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:27:24.114585    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.123406    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:27:24.132626    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:27:24.141378    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:27:24.150490    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:27:24.158980    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:27:24.167068    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:27:24.175030    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.269460    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:27:24.289328    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.289405    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:27:24.304907    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.317291    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:27:24.330289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.340851    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.351456    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:27:24.376914    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.387402    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.402522    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:27:24.405426    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:27:24.412799    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:27:24.426019    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:27:24.528550    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:27:24.636829    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.636893    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:27:24.652027    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.753641    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:27.037286    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.283575266s)
	I0819 10:27:27.037346    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:27:27.047775    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:27:27.062961    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.074027    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:27:27.172330    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:27:27.284593    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.395779    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:27:27.409552    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.420868    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.532356    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:27:27.591558    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:27:27.591636    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:27:27.595967    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:27:27.596013    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:27:27.599275    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:27:27.625101    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:27:27.625173    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.642636    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.693299    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:27:27.693355    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:27.693783    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:27:27.698129    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:27.708916    4789 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:27:27.708982    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:27.709038    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:27.721971    4789 docker.go:685] Got preloaded images: 
	I0819 10:27:27.721984    4789 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0819 10:27:27.722034    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:27.730353    4789 ssh_runner.go:195] Run: which lz4
	I0819 10:27:27.733218    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 10:27:27.733323    4789 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:27:27.736425    4789 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:27:27.736445    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0819 10:27:28.750864    4789 docker.go:649] duration metric: took 1.017557348s to copy over tarball
	I0819 10:27:28.750956    4789 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:27:31.074672    4789 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323648699s)
	I0819 10:27:31.074688    4789 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:27:31.100633    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:31.109680    4789 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0819 10:27:31.123335    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:31.234501    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:33.578614    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344043512s)
	I0819 10:27:33.578701    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:33.592021    4789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
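The eight images above are exactly the Kubernetes v1.31.0 control-plane set plus the storage provisioner, confirming the preload tarball restored the Docker image store after the daemon restart. A quick re-check from the host (profile name assumed from this test run):
	# List the preloaded images inside the VM over the profile's SSH session.
	minikube ssh -p ha-431000 "docker images --format '{{.Repository}}:{{.Tag}}'"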
	I0819 10:27:33.592040    4789 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:27:33.592048    4789 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:27:33.592132    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
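The [Unit]/[Service] fragment above becomes the kubelet systemd drop-in; the scp steps below write it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes). A hedged way to inspect the effective unit inside the VM:
	# Show the rendered drop-in and the merged kubelet unit (paths taken from the scp steps below).
	minikube ssh -p ha-431000 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	minikube ssh -p ha-431000 "sudo systemctl cat kubelet"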
	I0819 10:27:33.592198    4789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:27:33.629283    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:33.629295    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:33.629309    4789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:27:33.629329    4789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:27:33.629424    4789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
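The rendered config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) on the deprecated kubeadm.k8s.io/v1beta3 API, which is what triggers the two migration warnings in the kubeadm output further down. A hedged sketch of the migration kubeadm itself suggests, reusing the paths from this log:
	# Run inside the VM; kubeadm config migrate rewrites the deprecated v1beta3 spec
	# to the current API version (output filename here is illustrative).
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-migrated.yaml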
	I0819 10:27:33.629439    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:27:33.629491    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:27:33.642904    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:27:33.642969    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
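kube-vip runs as a static pod with NET_ADMIN/NET_RAW, wins leader election on the plndr-cp-lock lease, and advertises the HA VIP 192.169.0.254 via ARP on eth0 (vip_arp/vip_interface above), with control-plane load-balancing on port 8443. Two hedged checks once the pod is running:
	# Inside the control-plane VM: the elected leader should hold the VIP on eth0.
	ip addr show dev eth0 | grep 192.169.0.254
	# From any kubeconfig with access: the lease named by vip_leasename above.
	kubectl -n kube-system get lease plndr-cp-lock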
	I0819 10:27:33.643018    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:27:33.652008    4789 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:27:33.652070    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:27:33.660066    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:27:33.673571    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:27:33.686700    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:27:33.700085    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0819 10:27:33.713804    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:27:33.716661    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:33.726684    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:33.822205    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:27:33.836833    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:27:33.836844    4789 certs.go:194] generating shared ca certs ...
	I0819 10:27:33.836855    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.837051    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:27:33.837132    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:27:33.837142    4789 certs.go:256] generating profile certs ...
	I0819 10:27:33.837189    4789 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:27:33.837203    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0819 10:27:33.888319    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt ...
	I0819 10:27:33.888333    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt: {Name:mk2ecc34873277fbe11bf267ec0d97684e18e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888666    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key ...
	I0819 10:27:33.888675    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key: {Name:mk51abee214c838f4621902241303fe73ba93aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888900    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e
	I0819 10:27:33.888915    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0819 10:27:34.060027    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e ...
	I0819 10:27:34.060046    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e: {Name:mk108eb9cf88ab2aae15883e4a3724751adb3118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060347    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e ...
	I0819 10:27:34.060356    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e: {Name:mk8fae11cce9c9a45d3e151953d1ee9ab2cc82d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060557    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:27:34.060759    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:27:34.060929    4789 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:27:34.060943    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0819 10:27:34.243675    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0819 10:27:34.243690    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mkeb1eac7ee8b3901067565b7ff883710f2d1088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244061    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key ...
	I0819 10:27:34.244069    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key: {Name:mkc1afcd7a6a9a572716155e33c32e7def81650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244312    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:27:34.244340    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:27:34.244378    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:27:34.244398    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:27:34.244416    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:27:34.244448    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:27:34.244486    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:27:34.244521    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:27:34.244615    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:27:34.244666    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:27:34.244675    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:27:34.244748    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:27:34.244776    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:27:34.244831    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:27:34.244909    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:34.244942    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.244990    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.245007    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.245522    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:27:34.267677    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:27:34.287348    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:27:34.309971    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:27:34.330910    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 10:27:34.350036    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:27:34.370663    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:27:34.390457    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:27:34.410226    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:27:34.431025    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:27:34.451232    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:27:34.471133    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:27:34.487758    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:27:34.493769    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:27:34.506308    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511941    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511996    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.519851    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:27:34.531120    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:27:34.540803    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544302    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544341    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.548724    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:27:34.558817    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:27:34.568088    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571692    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571731    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.575999    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
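The test/ln pattern on the three certificates above is OpenSSL's hashed-directory convention: openssl x509 -hash prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs lets TLS clients locate the CA, which is why minikubeCA ends up behind b5213941.0. The same mapping, done explicitly:
	# The symlink name is the subject hash plus ".0" (first cert with that hash);
	# for minikubeCA above this prints b5213941.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"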
	I0819 10:27:34.585057    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:27:34.588207    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:27:34.588251    4789 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:34.588345    4789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:27:34.601241    4789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:27:34.609838    4789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:27:34.618794    4789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:27:34.627200    4789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:27:34.627208    4789 kubeadm.go:157] found existing configuration files:
	
	I0819 10:27:34.627243    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:27:34.635162    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:27:34.635198    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:27:34.643336    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:27:34.651247    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:27:34.651280    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:27:34.659346    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.667240    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:27:34.667281    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.675386    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:27:34.684053    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:27:34.684105    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 10:27:34.692357    4789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:27:34.751991    4789 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:27:34.752160    4789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:27:34.833970    4789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:27:34.834062    4789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:27:34.834153    4789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:27:34.842513    4789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:27:34.863067    4789 out.go:235]   - Generating certificates and keys ...
	I0819 10:27:34.863126    4789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:27:34.863179    4789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:27:35.003012    4789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:27:35.766829    4789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:27:35.976153    4789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:27:36.134850    4789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:27:36.228947    4789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:27:36.229166    4789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.375842    4789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:27:36.375934    4789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.597289    4789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:27:36.907219    4789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:27:37.426404    4789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:27:37.426585    4789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:27:37.566387    4789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:27:38.000620    4789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:27:38.121335    4789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:27:38.179042    4789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:27:38.231270    4789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:27:38.231752    4789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:27:38.233818    4789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:27:38.255454    4789 out.go:235]   - Booting up control plane ...
	I0819 10:27:38.255535    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:27:38.255605    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:27:38.255655    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:27:38.255734    4789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:27:38.255809    4789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:27:38.255842    4789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:27:38.364951    4789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:27:38.365069    4789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:27:39.366309    4789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001984632s
	I0819 10:27:39.366388    4789 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:27:45.029099    4789 kubeadm.go:310] [api-check] The API server is healthy after 5.666724975s
	I0819 10:27:45.039440    4789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:27:45.046481    4789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:27:45.059797    4789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:27:45.059959    4789 kubeadm.go:310] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:27:45.067482    4789 kubeadm.go:310] [bootstrap-token] Using token: rrr6yu.ivgebthw63l7ehzv
	I0819 10:27:45.106820    4789 out.go:235]   - Configuring RBAC rules ...
	I0819 10:27:45.107004    4789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:27:45.110638    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:27:45.151902    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:27:45.154406    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:27:45.156223    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:27:45.158190    4789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:27:45.434935    4789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:27:45.846068    4789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:27:46.434136    4789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:27:46.434675    4789 kubeadm.go:310] 
	I0819 10:27:46.434724    4789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:27:46.434728    4789 kubeadm.go:310] 
	I0819 10:27:46.434798    4789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:27:46.434808    4789 kubeadm.go:310] 
	I0819 10:27:46.434829    4789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:27:46.434881    4789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:27:46.434925    4789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:27:46.434930    4789 kubeadm.go:310] 
	I0819 10:27:46.434974    4789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:27:46.434984    4789 kubeadm.go:310] 
	I0819 10:27:46.435035    4789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:27:46.435041    4789 kubeadm.go:310] 
	I0819 10:27:46.435080    4789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:27:46.435139    4789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:27:46.435197    4789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:27:46.435204    4789 kubeadm.go:310] 
	I0819 10:27:46.435268    4789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:27:46.435333    4789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:27:46.435337    4789 kubeadm.go:310] 
	I0819 10:27:46.435410    4789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435498    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd \
	I0819 10:27:46.435515    4789 kubeadm.go:310] 	--control-plane 
	I0819 10:27:46.435520    4789 kubeadm.go:310] 
	I0819 10:27:46.435589    4789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:27:46.435594    4789 kubeadm.go:310] 
	I0819 10:27:46.435664    4789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435746    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd 
	I0819 10:27:46.435997    4789 kubeadm.go:310] W0819 17:27:34.545490    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436229    4789 kubeadm.go:310] W0819 17:27:34.546600    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436316    4789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
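The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA for joining nodes: it is the SHA-256 of the CA certificate's DER-encoded public key. It can be recomputed with the standard kubeadm recipe (minikube keeps its CA under /var/lib/minikube/certs rather than /etc/kubernetes/pki):
	# Recompute the discovery hash; should print ec43ca3c... as in the join command above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'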
	I0819 10:27:46.436331    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:46.436337    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:46.458203    4789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:27:46.517773    4789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:27:46.523858    4789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:27:46.523872    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:27:46.539513    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 10:27:46.759807    4789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:27:46.759878    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:46.759883    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0819 10:27:46.777623    4789 ops.go:34] apiserver oom_adj: -16
	I0819 10:27:46.926523    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.427175    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.927281    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.428033    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.926686    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.426608    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.926666    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:50.010199    4789 kubeadm.go:1113] duration metric: took 3.25030545s to wait for elevateKubeSystemPrivileges
	I0819 10:27:50.010216    4789 kubeadm.go:394] duration metric: took 15.42163041s to StartCluster
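elevateKubeSystemPrivileges is the post-init bootstrap seen above: label the primary node, bind kube-system's default ServiceAccount to cluster-admin via the minikube-rbac ClusterRoleBinding, and poll "get sa default" until the ServiceAccount controller has created it. Hedged spot-checks from the host:
	# Confirm the node labels and the RBAC binding created above.
	kubectl get node ha-431000 --show-labels
	kubectl get clusterrolebinding minikube-rbac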
	I0819 10:27:50.010227    4789 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.010325    4789 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.010762    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.011021    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:27:50.011033    4789 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.011050    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:27:50.011076    4789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:27:50.011116    4789 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0819 10:27:50.011120    4789 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0819 10:27:50.011148    4789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0819 10:27:50.011152    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.011155    4789 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0819 10:27:50.011186    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.011415    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011420    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011430    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.011431    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.020667    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
	I0819 10:27:50.021171    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021230    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
	I0819 10:27:50.021523    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021533    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.021634    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021753    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.021940    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.022115    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.022146    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.022229    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.022806    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.022988    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.023051    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.024924    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.025156    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:27:50.025529    4789 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:27:50.025699    4789 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0819 10:27:50.025720    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.025937    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.025963    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.031229    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51138
	I0819 10:27:50.031604    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.031942    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.031953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.032154    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.032270    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.032358    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.032435    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.033436    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.034958    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51140
	I0819 10:27:50.035269    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.035586    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.035596    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.035796    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.036148    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.036165    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.044937    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51142
	I0819 10:27:50.045312    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.045667    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.045680    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.045893    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.045996    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.046077    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.046151    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.047102    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.047225    4789 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.047234    4789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:27:50.047243    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.047325    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.047417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.047495    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.047571    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.056055    4789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:27:50.076134    4789 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.076146    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:27:50.076163    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.076310    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.076417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.076556    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.076664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.113554    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
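The sed pipeline above edits the CoreDNS Corefile in its ConfigMap: it inserts a hosts plugin block before the "forward . /etc/resolv.conf" line and a log directive before errors, so host.minikube.internal resolves inside the cluster. The resulting stanza looks roughly like this (surrounding plugins assumed from a stock minikube Corefile):
	.:53 {
	    log
	    errors
	    # ... health/ready/kubernetes plugins unchanged ...
	    hosts {
	       192.169.0.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    # ... cache/loop/reload/loadbalance unchanged ...
	}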
	I0819 10:27:50.127003    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.262022    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.488277    4789 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0819 10:27:50.488318    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488327    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488534    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488547    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488556    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488563    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488564    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488681    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488704    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488718    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488767    4789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:27:50.488780    4789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:27:50.488862    4789 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 10:27:50.488867    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.488877    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.488882    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495057    4789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:27:50.495477    4789 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 10:27:50.495484    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.495490    4789 round_trippers.go:473]     Content-Type: application/json
	I0819 10:27:50.495494    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.498504    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:27:50.498632    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.498641    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.498797    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.498806    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.498814    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649595    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649607    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.649833    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.649843    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649848    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.649874    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649893    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.650019    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.650028    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.650044    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.673040    4789 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 10:27:50.709732    4789 addons.go:510] duration metric: took 698.654107ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 10:27:50.709774    4789 start.go:246] waiting for cluster config update ...
	I0819 10:27:50.709799    4789 start.go:255] writing updated cluster config ...
	I0819 10:27:50.746763    4789 out.go:201] 
	I0819 10:27:50.768467    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.768565    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.790908    4789 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:27:50.832651    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:50.832673    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:50.832790    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:50.832801    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:50.832852    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.833261    4789 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:50.833314    4789 start.go:364] duration metric: took 41.162µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:27:50.833329    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
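To make the wall of config above easier to scan, the part that matters for this HA test is the node list. A trimmed Go illustration of its shape (field names copied from the dump; this is not minikube's actual type definition): two control-plane entries sit behind the shared APIServerHAVIP 192.169.0.254, and m02's IP is still empty because DHCP has not assigned one yet.

    package main

    import "fmt"

    // Node mirrors the fields visible in the logged config dump; illustrative only.
    type Node struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ControlPlane      bool
        Worker            bool
    }

    func main() {
        apiServerHAVIP := "192.169.0.254" // shared virtual IP from the dump above
        nodes := []Node{
            {Name: "", IP: "192.169.0.5", Port: 8443, KubernetesVersion: "v1.31.0", ControlPlane: true, Worker: true},
            {Name: "m02", IP: "", Port: 8443, KubernetesVersion: "v1.31.0", ControlPlane: true, Worker: true}, // IP filled in after DHCP
        }
        fmt.Println("HA VIP:", apiServerHAVIP)
        for _, n := range nodes {
            fmt.Printf("node %q ip=%q control-plane=%v\n", n.Name, n.IP, n.ControlPlane)
        }
    }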
	I0819 10:27:50.833382    4789 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0819 10:27:50.854688    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:50.854833    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.854870    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.864309    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
	I0819 10:27:50.864640    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.864951    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.864963    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.865175    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.865294    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:27:50.865374    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:27:50.865472    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:50.865485    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:50.865515    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:50.865553    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865565    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865607    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:50.865634    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865649    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865666    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:50.865676    4789 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0819 10:27:50.865754    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.865776    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:27:50.891966    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:50.891987    4789 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0819 10:27:50.892145    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.892330    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:50.892137    4845 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:50.892421    4789 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:51.078705    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.078630    4845 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa...
	I0819 10:27:51.171843    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.171751    4845 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0819 10:27:51.171860    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0819 10:27:51.171868    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0819 10:27:51.172685    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.172591    4845 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02 ...
	I0819 10:27:51.544884    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.544910    4789 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:27:51.544922    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:27:51.571631    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:27:51.571653    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:51.571680    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571706    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571739    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:51.571767    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
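A quick gloss on that hyperkit invocation, since the flags are terse (this reading is based on the bhyve-derived hyperkit CLI, not on anything minikube logs explicitly): -F names the pid file, -c 2 and -m 2200M size the guest, -A generates ACPI tables, -u keeps the RTC in UTC, each -s attaches a device to a PCI slot (hostbridge, LPC bridge, virtio-net, the raw disk via virtio-blk, the boot2docker ISO via ahci-cd, and a virtio entropy device), -U pins the VM UUID so vmnet derives a stable MAC and therefore a stable DHCP lease, -l com1,autopty=...,log=... wires the serial console to a pty plus a console ring log, and -f kexec,bzimage,initrd,... boots the extracted kernel and initrd directly with the appended kernel command line.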
	I0819 10:27:51.571780    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:51.574668    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Pid is 4850
	I0819 10:27:51.575734    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:27:51.575757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.575783    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:51.576702    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:51.576759    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:51.576778    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:51.576816    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:51.576830    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:51.576844    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:51.582262    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:51.590515    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:51.591362    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:51.591388    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:51.591397    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:51.591407    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:51.978930    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:51.978947    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:52.094059    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:52.094091    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:52.094127    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:52.094142    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:52.094869    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:52.094879    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:53.577521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0819 10:27:53.577541    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:53.577636    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:53.578446    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:53.578461    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:53.578472    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:53.578481    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:53.578489    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:53.578507    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:55.579485    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0819 10:27:55.579501    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:55.579576    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:55.580358    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:55.580387    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:55.580414    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:55.580426    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:55.580434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:55.580442    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.581588    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0819 10:27:57.581603    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:57.581681    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:57.582486    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:57.582510    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:57.582521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:57.582530    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:57.582540    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:57.582548    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.680321    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:27:57.680434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:27:57.680445    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:27:57.704982    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:27:59.583757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0819 10:27:59.583772    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:59.583842    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:59.584652    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:59.584696    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:59.584710    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:59.584720    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:59.584729    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:59.584737    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:28:01.585137    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0819 10:28:01.585154    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.585235    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.585996    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:28:01.586042    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:28:01.586055    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:28:01.586080    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:28:01.586086    4789 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
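The Attempt 0 through Attempt 5 lines above are the driver polling macOS's vmnet DHCP lease database until the MAC it generated shows up. A minimal sketch of that loop, assuming the usual /var/db/dhcpd_leases block layout (name=/ip_address=/hw_address= entries between braces); the parser here is illustrative, not the driver's code:

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
        "time"
    )

    // findLeaseIP scans the lease db for a block mentioning the MAC and
    // returns the ip_address recorded in that block.
    func findLeaseIP(path, mac string) (string, bool) {
        data, err := os.ReadFile(path)
        if err != nil {
            return "", false
        }
        ipRe := regexp.MustCompile(`ip_address=([0-9.]+)`)
        for _, block := range strings.Split(string(data), "}") {
            if !strings.Contains(block, mac) {
                continue
            }
            if m := ipRe.FindStringSubmatch(block); m != nil {
                return m[1], true
            }
        }
        return "", false
    }

    func main() {
        mac := "5a:74:68:47:b9:72" // MAC generated for the new VM above
        for attempt := 0; attempt < 30; attempt++ {
            if ip, ok := findLeaseIP("/var/db/dhcpd_leases", mac); ok {
                fmt.Println("IP:", ip)
                return
            }
            time.Sleep(2 * time.Second) // the log shows roughly 2s between attempts
        }
        fmt.Println("no lease found")
    }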
	I0819 10:28:01.586098    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:01.586694    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586794    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586889    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:28:01.586896    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:28:01.586980    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.587029    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.587790    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:28:01.587796    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:28:01.587800    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:28:01.587804    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:01.587881    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:01.587956    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588060    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588138    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:01.588256    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:01.588435    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:01.588443    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:28:02.645180    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
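The exit 0 round trip above is the availability probe: it proves sshd is up and the generated key authenticates before any provisioning command is sent. A minimal sketch of the same idea with golang.org/x/crypto/ssh (the 2s retry cadence and 2-minute budget are assumptions; address, user, and key path are taken from the log):

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH retries until `exit 0` succeeds over SSH or the timeout expires.
    func waitForSSH(addr string, cfg *ssh.ClientConfig, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
                sess, err := client.NewSession()
                if err == nil {
                    runErr := sess.Run("exit 0") // same no-op command as in the log
                    sess.Close()
                    client.Close()
                    if runErr == nil {
                        return nil
                    }
                } else {
                    client.Close()
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh not reachable within %s", timeout)
    }

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
            Timeout:         5 * time.Second,
        }
        if err := waitForSSH("192.169.0.6:22", cfg, 2*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("SSH is available")
    }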
	I0819 10:28:02.645193    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:28:02.645198    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.645326    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.645422    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645501    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645583    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.645718    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.645869    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.645877    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:28:02.700961    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:28:02.700992    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:28:02.700998    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:28:02.701003    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701132    4789 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:28:02.701143    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701237    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.701327    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.701424    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701502    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701588    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.701720    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.701855    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.701864    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:28:02.773500    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:28:02.773515    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.773649    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.773737    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773840    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773945    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.774071    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.774226    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.774237    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:28:02.838956    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:28:02.838971    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:28:02.838984    4789 buildroot.go:174] setting up certificates
	I0819 10:28:02.838992    4789 provision.go:84] configureAuth start
	I0819 10:28:02.838998    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.839135    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:02.839223    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.839322    4789 provision.go:143] copyHostCerts
	I0819 10:28:02.839347    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839393    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:28:02.839399    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839532    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:28:02.839738    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839769    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:28:02.839774    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839845    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:28:02.839992    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840021    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:28:02.840025    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840090    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:28:02.840244    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
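provision.go:117 above mints the Docker TLS server certificate with a SAN set covering every address the daemon may be reached by (127.0.0.1, 192.169.0.6, ha-431000-m02, localhost, minikube). A minimal standard-library sketch of building that SAN set; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair listed in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN set logged above: loopback, the node IP, and the host names.
            DNSNames:    []string{"ha-431000-m02", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
        }
        // Self-signed here for brevity; the real flow signs against the cluster CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }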
	I0819 10:28:02.878856    4789 provision.go:177] copyRemoteCerts
	I0819 10:28:02.878899    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:28:02.878912    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.879041    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.879132    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.879231    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.879330    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:02.914748    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:28:02.914819    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:28:02.934608    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:28:02.934673    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:28:02.954833    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:28:02.954900    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:28:02.974652    4789 provision.go:87] duration metric: took 135.649275ms to configureAuth
	I0819 10:28:02.974666    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:28:02.974809    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:02.974823    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:02.974958    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.975063    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.975147    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975219    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975328    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.975454    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.975601    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.975609    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:28:03.033628    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:28:03.033639    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:28:03.033715    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:28:03.033730    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.033861    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.033950    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034053    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034140    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.034264    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.034412    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.034459    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:28:03.102644    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:28:03.102663    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.102811    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.102898    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.102999    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.103120    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.103244    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.103390    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.103404    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:28:04.637367    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:28:04.637381    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:28:04.637388    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0819 10:28:04.637524    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:28:04.637530    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:28:04.637534    4789 client.go:171] duration metric: took 13.771742286s to LocalClient.Create
	I0819 10:28:04.637544    4789 start.go:167] duration metric: took 13.771771513s to libmachine.API.Create "ha-431000"
	I0819 10:28:04.637550    4789 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:28:04.637557    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:28:04.637566    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.637712    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:28:04.637723    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.637834    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.637926    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.638026    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.638127    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.678475    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:28:04.682965    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:28:04.682980    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:28:04.683079    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:28:04.683246    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:28:04.683253    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:28:04.683434    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:28:04.695086    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:04.723279    4789 start.go:296] duration metric: took 85.720185ms for postStartSetup
	I0819 10:28:04.723311    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:04.723943    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.724123    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:28:04.724446    4789 start.go:128] duration metric: took 13.890752069s to createHost
	I0819 10:28:04.724460    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.724558    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.724679    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724786    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724871    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.724979    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:04.725097    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:04.725103    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:28:04.784682    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088484.852271103
	
	I0819 10:28:04.784694    4789 fix.go:216] guest clock: 1724088484.852271103
	I0819 10:28:04.784698    4789 fix.go:229] Guest: 2024-08-19 10:28:04.852271103 -0700 PDT Remote: 2024-08-19 10:28:04.724454 -0700 PDT m=+55.319126445 (delta=127.817103ms)
	I0819 10:28:04.784725    4789 fix.go:200] guest clock delta is within tolerance: 127.817103ms
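The numbers in those fix.go lines are worth unpacking once: the guest returned 1724088484.852271103 from date +%s.%N while the host clock read 1724088484.724454 on completion, so the delta is 1724088484.852271103 - 1724088484.724454 ≈ 0.127817 s, exactly the 127.817103ms logged (a figure that also absorbs the SSH round-trip latency). Since it is inside the driver's tolerance, no guest clock resync is attempted.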
	I0819 10:28:04.784731    4789 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.951104834s
	I0819 10:28:04.784750    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.784884    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.807240    4789 out.go:177] * Found network options:
	I0819 10:28:04.829600    4789 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:28:04.851548    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.851607    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852495    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852747    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852876    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:28:04.852915    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:28:04.852962    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.853080    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:28:04.853100    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.853127    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853372    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853394    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853596    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853633    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853742    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853804    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.853880    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:28:04.886788    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:28:04.886847    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:28:04.931189    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:28:04.931209    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:04.931315    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:04.947443    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:28:04.955693    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:28:04.964155    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:28:04.964197    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:28:04.972493    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.980548    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:28:04.988709    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.996856    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:28:05.005271    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:28:05.013575    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:28:05.021801    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:28:05.030285    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:28:05.037842    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:28:05.045332    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.140730    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:28:05.159555    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:05.159625    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:28:05.177222    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.189624    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:28:05.203743    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.214606    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.224836    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:28:05.249649    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.261132    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:05.276191    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:28:05.279129    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:28:05.287175    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:28:05.300748    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:28:05.396444    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:28:05.505778    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:28:05.505805    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:28:05.520914    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.616215    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:28:07.911303    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.295016426s)
	I0819 10:28:07.911366    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:28:07.923467    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:28:07.938312    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:07.949283    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:28:08.046922    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:28:08.152880    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.256594    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:28:08.271339    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:08.283089    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.384798    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:28:08.441813    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:28:08.441881    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:28:08.446421    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:28:08.446473    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:28:08.449807    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:28:08.479621    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:28:08.479690    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.496571    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.537488    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:28:08.579078    4789 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:28:08.603340    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:08.603786    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:28:08.608372    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:08.618166    4789 mustload.go:65] Loading cluster: ha-431000
	I0819 10:28:08.618314    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:08.618533    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.618549    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.627122    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
	I0819 10:28:08.627459    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.627845    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.627857    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.628097    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.628239    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:28:08.628342    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:08.628430    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:28:08.629353    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:08.629592    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.629608    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.638041    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51172
	I0819 10:28:08.638388    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.638753    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.638770    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.638992    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.639108    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:08.639209    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:28:08.639216    4789 certs.go:194] generating shared ca certs ...
	I0819 10:28:08.639225    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.639357    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:28:08.639425    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:28:08.639434    4789 certs.go:256] generating profile certs ...
	I0819 10:28:08.639538    4789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:28:08.639562    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788
	I0819 10:28:08.639575    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0819 10:28:08.693749    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 ...
	I0819 10:28:08.693766    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788: {Name:mkade16cb35e521e9e55fc42d7cb129c8b94b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694149    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 ...
	I0819 10:28:08.694160    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788: {Name:mkeae0a28d48da45f84299952289f15db5f944f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694378    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:28:08.694703    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:28:08.694954    4789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:28:08.694964    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:28:08.694987    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:28:08.695006    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:28:08.695024    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:28:08.695042    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:28:08.695060    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:28:08.695078    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:28:08.695096    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:28:08.695175    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:28:08.695213    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:28:08.695228    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:28:08.695261    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:28:08.695290    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:28:08.695321    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:28:08.695400    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:08.695438    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:28:08.695462    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:28:08.695482    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:08.695511    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:08.695664    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:08.695745    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:08.695845    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:08.695925    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:08.729193    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:28:08.736181    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:28:08.748665    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:28:08.751826    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:28:08.773481    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:28:08.777252    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:28:08.787546    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:28:08.791015    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:28:08.800105    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:28:08.803218    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:28:08.812240    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:28:08.815351    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:28:08.824083    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:28:08.844052    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:28:08.864107    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:28:08.884612    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:28:08.904284    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 10:28:08.924397    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:28:08.944026    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:28:08.964689    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:28:08.984934    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:28:09.004413    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:28:09.024043    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:28:09.043924    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:28:09.058066    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:28:09.071585    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:28:09.085080    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:28:09.098536    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:28:09.112048    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:28:09.125242    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:28:09.139717    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:28:09.144032    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:28:09.152602    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.155967    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.156009    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.160192    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:28:09.168568    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:28:09.176997    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180533    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180568    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.184799    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:28:09.193356    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:28:09.201811    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205453    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205494    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.209760    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 10:28:09.218392    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:28:09.222392    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:28:09.222437    4789 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:28:09.222498    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:28:09.222516    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:28:09.222559    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:28:09.234408    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:28:09.234452    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 10:28:09.234506    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.242939    4789 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:28:09.242994    4789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0819 10:28:09.251336    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 10:28:11.797289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:28:11.809069    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.809192    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.812267    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:28:11.812291    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 10:28:12.469259    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.469340    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.472845    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:28:12.472869    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:28:13.348737    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.348820    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.352429    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:28:13.352449    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:28:13.542994    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:28:13.550937    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:28:13.564187    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:28:13.577654    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:28:13.591433    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:28:13.594347    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:13.604347    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:13.710422    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:13.730131    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:13.730407    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:13.730448    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:13.739474    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51199
	I0819 10:28:13.739816    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:13.740174    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:13.740190    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:13.740438    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:13.740564    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:13.740661    4789 start.go:317] joinCluster: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:28:13.740750    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 10:28:13.740767    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:13.740857    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:13.740939    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:13.741027    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:13.741101    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:13.815525    4789 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:13.815563    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0819 10:28:41.108330    4789 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (27.292143754s)
	I0819 10:28:41.108351    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 10:28:41.504714    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000-m02 minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=false
	I0819 10:28:41.585348    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-431000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 10:28:41.693283    4789 start.go:319] duration metric: took 27.951997328s to joinCluster
	I0819 10:28:41.693326    4789 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:41.693537    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:41.715528    4789 out.go:177] * Verifying Kubernetes components...
	I0819 10:28:41.790354    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:41.995139    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:42.017369    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:28:42.017608    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:28:42.017650    4789 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:28:42.017827    4789 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:42.017919    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.017925    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.017930    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.017935    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.025432    4789 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 10:28:42.518902    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.518917    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.518923    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.518927    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.521742    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:43.018396    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.018411    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.018417    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.018421    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.021454    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:43.518031    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.518083    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.518106    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.518116    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.522999    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:44.018193    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.018219    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.018231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.018237    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.021854    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:44.022387    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:44.518152    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.518189    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.518196    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.518199    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.520027    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.019772    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.019792    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.019799    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.019803    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.021628    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.518039    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.518053    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.518059    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.518064    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.520113    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:46.018198    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.018232    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.018239    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.018243    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.020136    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.518474    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.518490    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.518496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.518499    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.520505    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.520916    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:47.019124    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.019150    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.019162    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.022729    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:47.518316    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.518341    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.518351    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.518356    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.520471    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:48.019594    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.019620    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.019630    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.019636    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.023447    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:48.518492    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.518526    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.518583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.518593    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.523421    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:48.523787    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:49.019217    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.019242    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.019254    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.019260    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.022862    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:49.520299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.520324    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.520337    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.520342    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.523532    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.019383    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.019412    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.019424    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.019430    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.022847    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.519489    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.519503    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.519511    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.519515    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.522131    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:51.019130    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.019153    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.019163    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.022497    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:51.022894    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:51.518391    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.518448    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.518465    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.518476    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.521848    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.019014    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.019045    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.019103    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.019117    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.022339    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.519630    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.519644    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.519651    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.519655    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.522019    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.018435    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.018460    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.018472    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.018480    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.021850    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:53.518299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.518340    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.518349    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.518355    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.520795    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.521268    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:54.020380    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.020406    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.020418    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.020423    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.024178    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:54.519346    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.519364    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.519383    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.519387    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.521155    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:55.020400    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.020425    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.020437    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.020444    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.024326    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:55.519229    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.519245    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.519264    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.519268    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.521435    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:55.521852    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:56.019678    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.019703    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.019714    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.019719    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.023317    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:56.518539    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.518563    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.518576    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.518581    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.521781    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.020424    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.020449    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.020460    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.020465    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.024114    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.519399    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.519428    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.519468    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.519475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.522788    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.523223    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:58.018734    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.018759    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.018770    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.018777    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.022242    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.518348    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.518359    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.518371    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.518375    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.522907    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.523168    4789 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:28:58.523182    4789 node_ready.go:38] duration metric: took 16.504973252s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:58.523189    4789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:28:58.523237    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:28:58.523243    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.523249    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.523253    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.528083    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.532699    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.532761    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:28:58.532768    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.532774    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.532776    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.535978    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.536344    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.536351    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.536358    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.536361    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.538061    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.538368    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.538377    4789 pod_ready.go:82] duration metric: took 5.660556ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538383    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538413    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:28:58.538417    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.538423    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.538428    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.540013    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.540457    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.540465    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.540471    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.540475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.542120    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.542393    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.542400    4789 pod_ready.go:82] duration metric: took 4.011453ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542406    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542439    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:28:58.542444    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.542449    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.542454    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.543986    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.544340    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.544347    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.544353    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.544356    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.545868    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.546173    4789 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.546181    4789 pod_ready.go:82] duration metric: took 3.769725ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546187    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546221    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:28:58.546226    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.546231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.546234    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.547638    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.548110    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.548118    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.548123    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.548127    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.549514    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.549853    4789 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.549860    4789 pod_ready.go:82] duration metric: took 3.668598ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.549868    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.718822    4789 request.go:632] Waited for 168.888912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718861    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718867    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.718872    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.718876    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.721032    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:58.919673    4789 request.go:632] Waited for 198.011193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919731    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919740    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.919750    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.919807    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.923236    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.923670    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.923682    4789 pod_ready.go:82] duration metric: took 373.799986ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.923691    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.119399    4789 request.go:632] Waited for 195.629207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119559    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119572    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.119583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.119589    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.122804    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.318619    4789 request.go:632] Waited for 195.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318674    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318695    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.318702    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.318705    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.320812    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.321165    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.321173    4789 pod_ready.go:82] duration metric: took 397.466691ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.321180    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.520541    4789 request.go:632] Waited for 199.292765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520642    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520652    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.520663    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.520672    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.524463    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.718728    4789 request.go:632] Waited for 192.615056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718803    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718811    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.718818    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.718823    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.720955    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.721397    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.721407    4789 pod_ready.go:82] duration metric: took 400.213219ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.721415    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.918907    4789 request.go:632] Waited for 197.434904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919004    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919014    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.919024    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.919030    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.922451    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.119192    4789 request.go:632] Waited for 196.220574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119263    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119272    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.119286    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.119297    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.122630    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.122957    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.122968    4789 pod_ready.go:82] duration metric: took 401.538458ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.122977    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.320524    4789 request.go:632] Waited for 197.475989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320660    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320672    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.320681    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.320689    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.323985    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.519403    4789 request.go:632] Waited for 194.628597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519535    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519546    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.519560    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.519568    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.523121    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.523435    4789 pod_ready.go:93] pod "kube-proxy-5h7j2" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.523449    4789 pod_ready.go:82] duration metric: took 400.456993ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.523457    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.718666    4789 request.go:632] Waited for 195.15054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718742    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718752    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.718786    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.718800    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.721920    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.918782    4789 request.go:632] Waited for 196.40919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918873    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918882    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.918896    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.918906    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.922355    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.922815    4789 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.922824    4789 pod_ready.go:82] duration metric: took 399.351509ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.922830    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.118854    4789 request.go:632] Waited for 195.977175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118950    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118965    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.118981    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.118987    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.122683    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.318886    4789 request.go:632] Waited for 195.887859ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319029    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319042    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.319053    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.319063    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.322689    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.323187    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.323200    4789 pod_ready.go:82] duration metric: took 400.355182ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.323208    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.518928    4789 request.go:632] Waited for 195.662505ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519043    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519057    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.519070    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.519077    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.522736    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.718819    4789 request.go:632] Waited for 195.65197ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718885    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718891    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.718899    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.718905    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.721246    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:29:01.721682    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.721691    4789 pod_ready.go:82] duration metric: took 398.467113ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.721701    4789 pod_ready.go:39] duration metric: took 3.198431164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
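The pod_ready loop above issues paired GETs (the pod, then its node) until each system pod reports the Ready condition, with a 6m0s cap per pod. Below is a minimal client-go sketch of that readiness check; it is not minikube's actual pod_ready implementation, and the kubeconfig path and pod name are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirror the 6m0s per-pod budget seen in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-hr2qx", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second): // poll interval chosen for the sketch
		}
	}
}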
	I0819 10:29:01.721718    4789 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:29:01.721774    4789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:29:01.735634    4789 api_server.go:72] duration metric: took 20.041851081s to wait for apiserver process to appear ...
	I0819 10:29:01.735647    4789 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:29:01.735663    4789 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:29:01.738815    4789 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:29:01.738848    4789 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:29:01.738854    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.738860    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.738864    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.739526    4789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:29:01.739580    4789 api_server.go:141] control plane version: v1.31.0
	I0819 10:29:01.739589    4789 api_server.go:131] duration metric: took 3.937962ms to wait for apiserver health ...
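The healthz probe above is a plain HTTPS GET against https://192.169.0.5:8443/healthz whose body must read "ok". A minimal sketch follows; it uses InsecureSkipVerify only to stay self-contained, whereas minikube authenticates against the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with the literal body "ok".
	fmt.Printf("/healthz returned %d: %s\n", resp.StatusCode, body)
}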
	I0819 10:29:01.739594    4789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:29:01.918638    4789 request.go:632] Waited for 178.995687ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918733    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918745    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.918757    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.918762    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.922864    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:01.926606    4789 system_pods.go:59] 17 kube-system pods found
	I0819 10:29:01.926628    4789 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:01.926633    4789 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:01.926636    4789 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:01.926640    4789 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:01.926642    4789 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:01.926645    4789 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:01.926647    4789 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:01.926650    4789 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:01.926653    4789 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:01.926656    4789 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:01.926659    4789 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:01.926666    4789 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:01.926669    4789 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:01.926672    4789 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:01.926674    4789 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:01.926677    4789 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:01.926680    4789 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:01.926684    4789 system_pods.go:74] duration metric: took 187.080965ms to wait for pod list to return data ...
	I0819 10:29:01.926689    4789 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:29:02.119406    4789 request.go:632] Waited for 192.625822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119507    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119517    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.119528    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.119535    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.123120    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.123283    4789 default_sa.go:45] found service account: "default"
	I0819 10:29:02.123293    4789 default_sa.go:55] duration metric: took 196.595366ms for default service account to be created ...
	I0819 10:29:02.123300    4789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:29:02.319795    4789 request.go:632] Waited for 196.43255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319928    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319939    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.319947    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.319954    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.324586    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:02.328058    4789 system_pods.go:86] 17 kube-system pods found
	I0819 10:29:02.328071    4789 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:02.328075    4789 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:02.328078    4789 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:02.328083    4789 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:02.328086    4789 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:02.328088    4789 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:02.328091    4789 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:02.328093    4789 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:02.328096    4789 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:02.328098    4789 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:02.328101    4789 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:02.328103    4789 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:02.328106    4789 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:02.328109    4789 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:02.328112    4789 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:02.328115    4789 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:02.328117    4789 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:02.328122    4789 system_pods.go:126] duration metric: took 204.813151ms to wait for k8s-apps to be running ...
	I0819 10:29:02.328133    4789 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:29:02.328183    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:29:02.340002    4789 system_svc.go:56] duration metric: took 11.865981ms WaitForService to wait for kubelet
	I0819 10:29:02.340017    4789 kubeadm.go:582] duration metric: took 20.646222268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:29:02.340034    4789 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:29:02.518831    4789 request.go:632] Waited for 178.726274ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518969    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518980    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.518991    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.518998    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.522659    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.523326    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523339    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523348    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523351    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523354    4789 node_conditions.go:105] duration metric: took 183.311856ms to run NodePressure ...
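The NodePressure step lists every node and reads the capacity fields echoed above (ephemeral storage 17734596Ki, cpu 2 per node). A minimal sketch of reading the same capacities, again with a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %d\n", n.Name, storage.String(), cpu.Value())
	}
}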
	I0819 10:29:02.523361    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:29:02.523378    4789 start.go:255] writing updated cluster config ...
	I0819 10:29:02.544110    4789 out.go:201] 
	I0819 10:29:02.566227    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:02.566358    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.588965    4789 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:29:02.630777    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:29:02.630803    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:29:02.630953    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:29:02.630966    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:29:02.631053    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.631767    4789 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:29:02.631849    4789 start.go:364] duration metric: took 64.609µs to acquireMachinesLock for "ha-431000-m03"
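acquireMachinesLock serializes machine creation behind a named lock with a 500ms retry delay and a 13m0s timeout, per the log line above. A minimal sketch of that pattern using an exclusive lock file; the lock implementation minikube actually uses is richer than this.

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock takes an exclusive lock by creating the lock file atomically,
// retrying until the timeout elapses. The returned func releases the lock.
func acquireLock(path string, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(500 * time.Millisecond) // mirrors the Delay:500ms in the log
	}
}

func main() {
	release, err := acquireLock("/tmp/ha-431000-m03.lock", 13*time.Minute) // placeholder lock path
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; machine provisioning may proceed")
}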
	I0819 10:29:02.631869    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:29:02.631978    4789 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0819 10:29:02.652968    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:29:02.653116    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:29:02.653158    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:29:02.663539    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0819 10:29:02.663925    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:29:02.664263    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:29:02.664277    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:29:02.664539    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:29:02.664672    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:02.664758    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:02.664867    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:29:02.664899    4789 client.go:168] LocalClient.Create starting
	I0819 10:29:02.664932    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:29:02.664992    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665005    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665051    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:29:02.665087    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665103    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665116    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:29:02.665122    4789 main.go:141] libmachine: (ha-431000-m03) Calling .PreCreateCheck
	I0819 10:29:02.665218    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.665228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:02.674109    4789 main.go:141] libmachine: Creating machine...
	I0819 10:29:02.674126    4789 main.go:141] libmachine: (ha-431000-m03) Calling .Create
	I0819 10:29:02.674302    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.674550    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.674293    4918 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:29:02.674675    4789 main.go:141] libmachine: (ha-431000-m03) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:29:02.956098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.955977    4918 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa...
	I0819 10:29:03.041212    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.041121    4918 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk...
	I0819 10:29:03.041230    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing magic tar header
	I0819 10:29:03.041239    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing SSH key tar header
	I0819 10:29:03.042098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.042003    4918 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03 ...
	I0819 10:29:03.582755    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.582783    4789 main.go:141] libmachine: (ha-431000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:29:03.582846    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:29:03.618942    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:29:03.618960    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:29:03.619021    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619049    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619085    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:29:03.619116    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:29:03.619133    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:29:03.621990    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Pid is 4921
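The driver builds the argv shown in the CmdLine above, forks the hyperkit child, and records its pid. A minimal sketch of launching and supervising such a command from Go; the flag list is abbreviated and the paths are placeholders, and the real driver additionally tracks the child through a pid file and JSON state.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Abbreviated subset of the hyperkit flags seen in the log above.
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", "/path/to/hyperkit.pid", // placeholder state path
		"-c", "2", "-m", "2200M",
		"-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net",
	)
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	log.Printf("hyperkit started, pid %d", cmd.Process.Pid)
	if err := cmd.Wait(); err != nil {
		log.Printf("hyperkit exited: %v", err)
	}
}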
	I0819 10:29:03.622461    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:29:03.622497    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.622585    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:03.623424    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:03.623486    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:03.623500    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:03.623537    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:03.623548    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:03.623558    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:03.623568    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:03.629643    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:29:03.638725    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:29:03.639577    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:03.639599    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:03.639609    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:03.639622    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.022361    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:29:04.022375    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:29:04.137228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:04.137262    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:04.137274    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:04.137284    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.138001    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:29:04.138016    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:29:05.623879    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 1
	I0819 10:29:05.623896    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:05.624023    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:05.624809    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:05.624873    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:05.624888    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:05.624904    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:05.624917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:05.624926    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:05.624935    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:07.626679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 2
	I0819 10:29:07.626696    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:07.626779    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:07.627539    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:07.627582    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:07.627592    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:07.627610    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:07.627619    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:07.627626    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:07.627635    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.627812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 3
	I0819 10:29:09.627828    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:09.627917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:09.628679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:09.628746    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:09.628777    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:09.628791    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:09.628799    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:09.628806    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:09.628812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.722721    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:29:09.722792    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:29:09.722802    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:29:09.745848    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:29:11.630390    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 4
	I0819 10:29:11.630407    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:11.630495    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:11.631275    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:11.631321    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:11.631331    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:11.631340    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:11.631359    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:11.631366    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:11.631387    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:13.633236    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 5
	I0819 10:29:13.633251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.633339    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.634147    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:13.634209    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0819 10:29:13.634221    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:29:13.634228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:29:13.634232    4789 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
	I0819 10:29:13.634299    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:13.634943    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635064    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635157    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:29:13.635165    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:29:13.635251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.635310    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.636120    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:29:13.636129    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:29:13.636133    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:29:13.636138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:13.636228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:13.636309    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636392    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636477    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:13.636587    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:13.636755    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:13.636763    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:29:14.697546    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
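WaitForSSH succeeds once the probe command "exit 0" above runs cleanly over SSH. A minimal sketch using golang.org/x/crypto/ssh; the user and key path are placeholders, and host-key verification is disabled only to keep the sketch short.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // placeholder user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; do not do this in production
	}
	client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	// SSH is considered available once this trivial command exits 0.
	if err := session.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}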
	I0819 10:29:14.697558    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:29:14.697564    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.697702    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.697798    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.697887    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.698009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.698168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.698318    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.698326    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:29:14.765778    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:29:14.765827    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:29:14.765833    4789 main.go:141] libmachine: Provisioning with buildroot...
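Provisioner detection works off the `cat /etc/os-release` output just above: the ID field (here `buildroot`) selects which provisioning backend is used. A sketch of that parse, assuming only the standard os-release key=value format (detectProvisioner is a hypothetical helper):

    package main

    import "strings"

    // detectProvisioner extracts the ID field from /etc/os-release
    // output, e.g. "buildroot" for the guest above.
    func detectProvisioner(osRelease string) string {
        for _, line := range strings.Split(osRelease, "\n") {
            if v, ok := strings.CutPrefix(line, "ID="); ok {
                return strings.Trim(v, `"`)
            }
        }
        return "unknown"
    }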
	I0819 10:29:14.765839    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.765977    4789 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:29:14.765988    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.766081    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.766185    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.766270    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766369    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766481    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.766635    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.766783    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.766792    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:29:14.841753    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:29:14.841769    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.841901    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.842009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842101    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842195    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.842324    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.842477    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.842489    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:29:14.911764    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
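The hostname script above is deliberately idempotent: it only touches /etc/hosts when no line already maps the new hostname, and in that case it rewrites an existing 127.0.1.1 entry in place rather than appending a duplicate, so re-provisioning the same machine does not stack entries.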
	I0819 10:29:14.911779    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:29:14.911793    4789 buildroot.go:174] setting up certificates
	I0819 10:29:14.911800    4789 provision.go:84] configureAuth start
	I0819 10:29:14.911807    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.911942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:14.912037    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.912110    4789 provision.go:143] copyHostCerts
	I0819 10:29:14.912141    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912187    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:29:14.912193    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912326    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:29:14.912504    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912534    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:29:14.912539    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912651    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:29:14.912808    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912854    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:29:14.912859    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912935    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:29:14.913083    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
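The server certificate above is issued from the local CA with a SAN list covering every name the Docker TLS endpoint may be reached by (127.0.0.1, 192.169.0.7, ha-431000-m03, localhost, minikube). A minimal sketch of issuing such a cert with Go's crypto/x509; a generic illustration of the technique, not minikube's actual code path:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a TLS server certificate signed by the given CA,
    // carrying the SAN list seen in the log (IPs plus hostnames).
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        ips []net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m03"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.169.0.7
            DNSNames:     dnsNames, // e.g. ha-431000-m03, localhost, minikube
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{
            Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }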
	I0819 10:29:15.064390    4789 provision.go:177] copyRemoteCerts
	I0819 10:29:15.064440    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:29:15.064455    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.064599    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.064695    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.064786    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.064886    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:15.103656    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:29:15.103727    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:29:15.123430    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:29:15.123497    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:29:15.143265    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:29:15.143333    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:29:15.162885    4789 provision.go:87] duration metric: took 251.064942ms to configureAuth
	I0819 10:29:15.162900    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:29:15.163052    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:15.163065    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:15.163221    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.163329    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.163417    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163506    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.163693    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.163824    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.163831    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:29:15.225270    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:29:15.225286    4789 buildroot.go:70] root file system type: tmpfs
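The `df --output=fstype /` probe above matters for what follows: the guest's root filesystem is tmpfs, so files written outside persistent mounts vanish on reboot, which is presumably why the docker unit below is regenerated and pushed on every provision rather than assumed to persist.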
	I0819 10:29:15.225356    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:29:15.225368    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.225510    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.225619    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225708    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225810    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.225948    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.226090    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.226134    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:29:15.299640    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:29:15.299658    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.299797    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.299889    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.299978    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.300067    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.300202    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.300355    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.300368    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:29:16.819930    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
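The `diff ... || { mv ...; systemctl ...; }` one-liner above is a small compare-and-swap: the freshly generated unit only replaces the installed one (and triggers daemon-reload, enable, and restart) when the two files differ. Here diff fails because no docker.service existed yet, so the new unit is installed and enabled for the first time, as the "Created symlink" line confirms.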
	
	I0819 10:29:16.819945    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:29:16.819953    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetURL
	I0819 10:29:16.820095    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:29:16.820107    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:29:16.820113    4789 client.go:171] duration metric: took 14.154897138s to LocalClient.Create
	I0819 10:29:16.820124    4789 start.go:167] duration metric: took 14.154947877s to libmachine.API.Create "ha-431000"
	I0819 10:29:16.820129    4789 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:29:16.820136    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:29:16.820145    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.820288    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:29:16.820301    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.820396    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.820494    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.820582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.820664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:16.862693    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:29:16.866416    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:29:16.866431    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:29:16.866540    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:29:16.866725    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:29:16.866732    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:29:16.866944    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:29:16.874578    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:29:16.904910    4789 start.go:296] duration metric: took 84.771069ms for postStartSetup
	I0819 10:29:16.904942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:16.905569    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.905740    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:16.906122    4789 start.go:128] duration metric: took 14.273822612s to createHost
	I0819 10:29:16.906138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.906230    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.906303    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906387    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906475    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.906573    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:16.906690    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:16.906697    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:29:16.969389    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088556.958185685
	
	I0819 10:29:16.969401    4789 fix.go:216] guest clock: 1724088556.958185685
	I0819 10:29:16.969406    4789 fix.go:229] Guest: 2024-08-19 10:29:16.958185685 -0700 PDT Remote: 2024-08-19 10:29:16.906131 -0700 PDT m=+127.499217490 (delta=52.054685ms)
	I0819 10:29:16.969416    4789 fix.go:200] guest clock delta is within tolerance: 52.054685ms
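The fix.go lines above implement a clock-skew check: the guest reports its time via `date +%s.%N`, and the delta against the host clock (52ms here) is compared to a tolerance before deciding whether a resync is needed. A sketch of that comparison, with clockDelta as a hypothetical helper:

    package main

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns
    // how far the guest clock lags (or leads) the host clock. Float
    // parsing loses sub-microsecond precision at epoch scale, which is
    // fine for a millisecond-level tolerance check like the one above.
    func clockDelta(guestOut string) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return time.Since(guest), nil
    }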
	I0819 10:29:16.969419    4789 start.go:83] releasing machines lock for "ha-431000-m03", held for 14.337247496s
	I0819 10:29:16.969437    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.969573    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.992258    4789 out.go:177] * Found network options:
	I0819 10:29:17.014265    4789 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:29:17.037508    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.037542    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.037561    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038432    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038682    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038835    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:29:17.038873    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:29:17.038922    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.038957    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.039067    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:29:17.039087    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:17.039116    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039298    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039332    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039497    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039590    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039679    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039721    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:17.039809    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:29:17.074320    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:29:17.074385    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:29:17.120302    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:29:17.120318    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.120398    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.135851    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:29:17.144402    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:29:17.152735    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.152784    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:29:17.161185    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.169599    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:29:17.177908    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.186319    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:29:17.194967    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:29:17.203702    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:29:17.212228    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:29:17.220632    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:29:17.228164    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:29:17.235717    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.329551    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:29:17.348829    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.348909    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:29:17.363903    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.374976    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:29:17.393061    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.404238    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.414728    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:29:17.438632    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.449143    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.464536    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:29:17.467445    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:29:17.474809    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:29:17.488421    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:29:17.581504    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:29:17.684960    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.684980    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:29:17.699658    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.803979    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:30:18.773891    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.968555005s)
	I0819 10:30:18.774012    4789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:30:18.808676    4789 out.go:201] 
	W0819 10:30:18.829152    4789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:29:15 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570013158Z" level=info msg="Starting up"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570447745Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.572542412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.584880924Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603137975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603181724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603219390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603233227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603303033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603338653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603471354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603509282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603521199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603528665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603591360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603811486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605351283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605389063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605504861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605538594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605610859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605677674Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607907354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607976584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607991948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608010711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608023403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608093276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608724366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608874333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608913351Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608929178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608943960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608968346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609006571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609021660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609032833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609044499Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609055485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609066063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609088279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609103865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609115537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609130257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609139734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609151164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609161605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609173829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609200246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609211000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609224200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609237871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609251525Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609296616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609316285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609327369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609362155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609478815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609512436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609530768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609541857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609553085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609563545Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610591556Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610680787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610769049Z" level=info msg="containerd successfully booted in 0.026402s"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.601341697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.606766805Z" level=info msg="Loading containers: start."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.688780306Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.769433920Z" level=info msg="Loading containers: done."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776749571Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776865122Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.804822251Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.805010917Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:29:16 ha-431000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.814047535Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:29:17 ha-431000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815466623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815881336Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815956644Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.816022765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:18 ha-431000-m03 dockerd[921]: time="2024-08-19T17:29:18.853267859Z" level=info msg="Starting up"
	Aug 19 17:30:18 ha-431000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
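Reading the journal above, the failure mode looks like this: the first dockerd (pid 514) came up fine using its managed containerd on /var/run/docker/containerd/containerd.sock, but after minikube rewrote the daemon config and restarted the service, the second dockerd (pid 921) instead waited on /run/containerd/containerd.sock, the standalone containerd socket that had just been stopped with `systemctl stop -f containerd`, and gave up when the dial deadline expired at 17:30:18. That one-minute hang matches the 1m0.96s the `systemctl restart docker` run took before exiting with status 1.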
	W0819 10:30:18.829235    4789 out.go:270] * 
	W0819 10:30:18.830413    4789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:30:18.888275    4789 out.go:201] 
	
	
	==> Docker <==
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621465217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 dockerd[1275]: time="2024-08-19T17:30:22.621560978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:22 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6d38fc70c811c9647892071fd07ef2e6455806b20e204cd6583df80c81ba64b7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 19 17:30:23 ha-431000 cri-dockerd[1168]: time="2024-08-19T17:30:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040175789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040258993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040272849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:30:24 ha-431000 dockerd[1275]: time="2024-08-19T17:30:24.040810082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:43:15 ha-431000 dockerd[1269]: time="2024-08-19T17:43:15.200751636Z" level=info msg="ignoring event" container=ed733554ed160b888c1f7459530b3d389ee69bed96d213508d208a4f2926cfc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.201258079Z" level=info msg="shim disconnected" id=ed733554ed160b888c1f7459530b3d389ee69bed96d213508d208a4f2926cfc3 namespace=moby
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.201498173Z" level=warning msg="cleaning up after shim disconnected" id=ed733554ed160b888c1f7459530b3d389ee69bed96d213508d208a4f2926cfc3 namespace=moby
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.201540415Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.540578680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.540705518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.540715759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:43:15 ha-431000 dockerd[1275]: time="2024-08-19T17:43:15.540887282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.004579691Z" level=info msg="shim disconnected" id=e7cacf032435fe5fd74c9ff947e51071e84739d9cdfb1d3f0b1c3f7f72df50f6 namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1269]: time="2024-08-19T17:43:16.004599876Z" level=info msg="ignoring event" container=e7cacf032435fe5fd74c9ff947e51071e84739d9cdfb1d3f0b1c3f7f72df50f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.004799413Z" level=warning msg="cleaning up after shim disconnected" id=e7cacf032435fe5fd74c9ff947e51071e84739d9cdfb1d3f0b1c3f7f72df50f6 namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.004913234Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.023070076Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:43:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.540369658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.546150369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.546220724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.546357823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e3a7fa32f1ca2       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       1                   868ee98671e83       storage-provisioner
	73731822fbc4d       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  1                   90ec229d87c2c       kube-vip-ha-431000
	da6e4a61b6cf8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   14 minutes ago       Running             busybox                   0                   6d38fc70c811c       busybox-7dff88458-x7m6m
	b9d1bccf00c94       cbb01a7bd410d                                                                                         16 minutes ago       Running             coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	e7cacf032435f       6e38f40d628db                                                                                         16 minutes ago       Exited              storage-provisioner       0                   868ee98671e83       storage-provisioner
	a3891ab602da5       cbb01a7bd410d                                                                                         16 minutes ago       Running             coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              16 minutes ago       Running             kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                         16 minutes ago       Running             kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	ed733554ed160       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     17 minutes ago       Exited              kube-vip                  0                   90ec229d87c2c       kube-vip-ha-431000
	11d9cd3b2f49f       1766f54c897f0                                                                                         17 minutes ago       Running             kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	262471364c991       604f5db92eaa8                                                                                         17 minutes ago       Running             kube-apiserver            0                   5a0fe916eaf1d       kube-apiserver-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                         17 minutes ago       Running             etcd                      0                   fc30d54d1b565       etcd-ha-431000
	2801f8f44773b       045733566833c                                                                                         17 minutes ago       Running             kube-controller-manager   0                   80d21805f230b       kube-controller-manager-ha-431000
	
	
	==> coredns [a3891ab602da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40841 - 35632 "HINFO IN 8043641794425982319.4992720317295253252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008506209s
	[INFO] 10.244.1.2:51889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132717s
	[INFO] 10.244.1.2:37985 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001601417s
	[INFO] 10.244.1.2:55682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.007910651s
	[INFO] 10.244.0.4:38616 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.000569215s
	[INFO] 10.244.0.4:47772 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000054313s
	[INFO] 10.244.1.2:49768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135774s
	[INFO] 10.244.1.2:55729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00095124s
	[INFO] 10.244.1.2:38602 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000089444s
	[INFO] 10.244.1.2:52875 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099022s
	[INFO] 10.244.1.2:49308 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063848s
	[INFO] 10.244.0.4:57863 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000064923s
	[INFO] 10.244.0.4:40409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096347s
	[INFO] 10.244.1.2:34617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084305s
	[INFO] 10.244.1.2:55843 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058734s
	[INFO] 10.244.0.4:43213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096675s
	[INFO] 10.244.0.4:44050 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031036s
	[INFO] 10.244.1.2:49077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105574s
	[INFO] 10.244.1.2:57560 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084227s
	[INFO] 10.244.1.2:40959 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135434s
	
	
	==> coredns [b9d1bccf00c9] <==
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54195 - 29045 "HINFO IN 6513715404119561949.1799819676960271336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007921235s
	[INFO] 10.244.1.2:45210 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.055498798s
	[INFO] 10.244.0.4:53730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111076s
	[INFO] 10.244.0.4:51704 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000411643s
	[INFO] 10.244.1.2:54559 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088744s
	[INFO] 10.244.1.2:58642 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064137s
	[INFO] 10.244.1.2:34281 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000845538s
	[INFO] 10.244.0.4:53439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000058375s
	[INFO] 10.244.0.4:33951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106207s
	[INFO] 10.244.0.4:38202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000034691s
	[INFO] 10.244.0.4:46478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119286s
	[INFO] 10.244.0.4:53704 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053613s
	[INFO] 10.244.0.4:42766 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051163s
	[INFO] 10.244.1.2:44413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116167s
	[INFO] 10.244.1.2:58453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067066s
	[INFO] 10.244.0.4:37472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063597s
	[INFO] 10.244.0.4:59559 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033396s
	[INFO] 10.244.1.2:59906 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000120736s
	[INFO] 10.244.0.4:47175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120659s
	[INFO] 10.244.0.4:56722 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121072s
	[INFO] 10.244.0.4:43652 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174608s
	[INFO] 10.244.0.4:32818 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00017028s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +2.712596] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230971] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.519395] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.106046] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.754357] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.260100] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.108326] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.116397] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.050322] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.370658] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.100232] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.114416] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.133019] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.706453] systemd-fstab-generator[1261]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.542020] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +4.524199] systemd-fstab-generator[1651]: Ignoring "noauto" option for root device
	[  +0.058523] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.145787] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
	[  +0.090131] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.001426] kauditd_printk_skb: 35 callbacks suppressed
	[Aug19 17:28] kauditd_printk_skb: 15 callbacks suppressed
	[ +36.695422] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [39fe08877284] <==
	{"level":"info","ts":"2024-08-19T17:44:45.785909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:45.786019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:45.786267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 3080] sent MsgPreVote request to c22c1f54a3cc7858 at term 2"}
	{"level":"warn","ts":"2024-08-19T17:44:45.831381Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502277735781,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:44:46.332513Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502277735781,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:44:46.833028Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502277735781,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:44:46.933811Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.000557517s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2024-08-19T17:44:46.933944Z","caller":"traceutil/trace.go:171","msg":"trace[12551317] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.000699204s","start":"2024-08-19T17:44:44.933232Z","end":"2024-08-19T17:44:46.933931Z","steps":["trace[12551317] 'agreement among raft nodes before linearized reading'  (duration: 2.000554175s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:44:46.934316Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:44:44.933206Z","time spent":"2.000771745s","remote":"127.0.0.1:43166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2024/08/19 17:44:46 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-08-19T17:44:47.086477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:47.086830Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:47.086938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:47.087041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 3080] sent MsgPreVote request to c22c1f54a3cc7858 at term 2"}
	{"level":"warn","ts":"2024-08-19T17:44:47.333449Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502277735781,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:44:47.344635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:44:40.344105Z","time spent":"7.000525803s","remote":"127.0.0.1:43324","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"warn","ts":"2024-08-19T17:44:47.344703Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:44:40.344066Z","time spent":"7.000636944s","remote":"127.0.0.1:43324","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"warn","ts":"2024-08-19T17:44:47.344824Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:44:40.344489Z","time spent":"7.000331285s","remote":"127.0.0.1:43324","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"warn","ts":"2024-08-19T17:44:47.834209Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502277735781,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:44:48.335438Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502277735781,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-19T17:44:48.385717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:48.385764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:48.385779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-08-19T17:44:48.385796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 3080] sent MsgPreVote request to c22c1f54a3cc7858 at term 2"}
	{"level":"warn","ts":"2024-08-19T17:44:48.836100Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502277735781,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 17:44:49 up 17 min,  0 users,  load average: 0.80, 0.56, 0.26
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:44:03.914439       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:44:13.915071       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:44:13.915216       1 main.go:299] handling current node
	I0819 17:44:13.915235       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:44:13.915245       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:44:13.915591       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:44:13.915662       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:44:23.913426       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:44:23.913516       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:44:23.913665       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:44:23.913725       1 main.go:299] handling current node
	I0819 17:44:23.913749       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:44:23.913762       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:44:33.915871       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:44:33.915976       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:44:33.916514       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:44:33.916580       1 main.go:299] handling current node
	I0819 17:44:33.916600       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:44:33.916609       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:44:43.913381       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:44:43.913603       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:44:43.914101       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:44:43.914492       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:44:43.914789       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:44:43.914957       1 main.go:299] handling current node
	
	
	==> kube-apiserver [262471364c99] <==
	E0819 17:44:49.329153       1 cacher.go:478] cacher (apiservices.apiregistration.k8s.io): unexpected ListAndWatch error: failed to list *apiregistration.APIService: etcdserver: request timed out; reinitializing...
	W0819 17:44:49.329192       1 reflector.go:561] storage/cacher.go:/csistoragecapacities: failed to list *storage.CSIStorageCapacity: etcdserver: request timed out
	E0819 17:44:49.329199       1 cacher.go:478] cacher (csistoragecapacities.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.CSIStorageCapacity: etcdserver: request timed out; reinitializing...
	I0819 17:44:49.329207       1 controller.go:157] Shutting down quota evaluator
	I0819 17:44:49.329214       1 controller.go:176] quota evaluator worker shutdown
	I0819 17:44:49.329245       1 controller.go:176] quota evaluator worker shutdown
	I0819 17:44:49.329249       1 controller.go:176] quota evaluator worker shutdown
	I0819 17:44:49.329332       1 controller.go:176] quota evaluator worker shutdown
	I0819 17:44:49.329341       1 controller.go:176] quota evaluator worker shutdown
	W0819 17:44:49.331274       1 reflector.go:561] storage/cacher.go:/validatingwebhookconfigurations: failed to list *admissionregistration.ValidatingWebhookConfiguration: etcdserver: request timed out
	E0819 17:44:49.331339       1 cacher.go:478] cacher (validatingwebhookconfigurations.admissionregistration.k8s.io): unexpected ListAndWatch error: failed to list *admissionregistration.ValidatingWebhookConfiguration: etcdserver: request timed out; reinitializing...
	W0819 17:44:49.331390       1 reflector.go:561] storage/cacher.go:/statefulsets: failed to list *apps.StatefulSet: etcdserver: request timed out
	E0819 17:44:49.331459       1 cacher.go:478] cacher (statefulsets.apps): unexpected ListAndWatch error: failed to list *apps.StatefulSet: etcdserver: request timed out; reinitializing...
	W0819 17:44:49.331558       1 reflector.go:561] storage/cacher.go:/jobs: failed to list *batch.Job: etcdserver: request timed out
	E0819 17:44:49.331926       1 cacher.go:478] cacher (jobs.batch): unexpected ListAndWatch error: failed to list *batch.Job: etcdserver: request timed out; reinitializing...
	W0819 17:44:49.331575       1 reflector.go:561] storage/cacher.go:/resourcequotas: failed to list *core.ResourceQuota: etcdserver: request timed out
	E0819 17:44:49.331987       1 cacher.go:478] cacher (resourcequotas): unexpected ListAndWatch error: failed to list *core.ResourceQuota: etcdserver: request timed out; reinitializing...
	W0819 17:44:49.331625       1 reflector.go:561] storage/cacher.go:/serviceaccounts: failed to list *core.ServiceAccount: etcdserver: request timed out
	E0819 17:44:49.332286       1 cacher.go:478] cacher (serviceaccounts): unexpected ListAndWatch error: failed to list *core.ServiceAccount: etcdserver: request timed out; reinitializing...
	W0819 17:44:49.331657       1 reflector.go:561] storage/cacher.go:/deployments: failed to list *apps.Deployment: etcdserver: request timed out
	E0819 17:44:49.332348       1 cacher.go:478] cacher (deployments.apps): unexpected ListAndWatch error: failed to list *apps.Deployment: etcdserver: request timed out; reinitializing...
	W0819 17:44:49.332215       1 reflector.go:561] storage/cacher.go:/poddisruptionbudgets: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out
	E0819 17:44:49.332447       1 cacher.go:478] cacher (poddisruptionbudgets.policy): unexpected ListAndWatch error: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out; reinitializing...
	W0819 17:44:49.332589       1 reflector.go:561] storage/cacher.go:/prioritylevelconfigurations: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out
	E0819 17:44:49.332622       1 cacher.go:478] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out; reinitializing...
	
	
	==> kube-controller-manager [2801f8f44773] <==
	I0819 17:44:14.929374       1 request.go:700] Waited for 1.298415002s, retries: 6, retry-after: 1s - retry-reason: due to server-side throttling, FlowSchema UID: "6fb80af4-8bca-46e7-ad3a-5028f0da03c7" - request: GET:https://192.169.0.5:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=2590&timeout=5m24s&timeoutSeconds=324&watch=true
	I0819 17:44:19.325361       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	E0819 17:44:26.333532       1 node_lifecycle_controller.go:978] "Error updating node" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" logger="node-lifecycle-controller" node="ha-431000-m02"
	E0819 17:44:26.333908       1 node_lifecycle_controller.go:978] "Error updating node" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" logger="node-lifecycle-controller" node="ha-431000-m04"
	E0819 17:44:26.334466       1 node_lifecycle_controller.go:978] "Error updating node" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" logger="node-lifecycle-controller" node="ha-431000"
	I0819 17:44:29.579544       1 request.go:700] Waited for 1.040088087s, retries: 1, retry-after: 1s - retry-reason: due to server-side throttling, FlowSchema UID: "6fb80af4-8bca-46e7-ad3a-5028f0da03c7" - request: GET:https://192.169.0.5:8443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=2609&timeout=6m37s&timeoutSeconds=397&watch=true
	E0819 17:44:35.332182       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-431000-m04"
	E0819 17:44:35.332281       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="etcdserver: request timed out" logger="node-lifecycle-controller" node=""
	E0819 17:44:35.339048       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-431000-m02"
	E0819 17:44:35.339207       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="etcdserver: request timed out" logger="node-lifecycle-controller" node=""
	E0819 17:44:35.339054       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-431000"
	E0819 17:44:35.339420       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="etcdserver: request timed out" logger="node-lifecycle-controller" node=""
	I0819 17:44:39.629178       1 request.go:700] Waited for 1.298956728s, retries: 7, retry-after: 1s - retry-reason: due to server-side throttling, FlowSchema UID: "6fb80af4-8bca-46e7-ad3a-5028f0da03c7" - request: GET:https://192.169.0.5:8443/apis/flowcontrol.apiserver.k8s.io/v1/flowschemas?allowWatchBookmarks=true&resourceVersion=2559&timeout=7m58s&timeoutSeconds=478&watch=true
	I0819 17:44:40.340071       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	E0819 17:44:47.348687       1 node_lifecycle_controller.go:978] "Error updating node" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" logger="node-lifecycle-controller" node="ha-431000"
	E0819 17:44:47.348687       1 node_lifecycle_controller.go:978] "Error updating node" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" logger="node-lifecycle-controller" node="ha-431000-m04"
	E0819 17:44:47.349760       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-431000"
	E0819 17:44:47.349803       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-431000-m04"
	E0819 17:44:47.349991       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.169.0.5:8443/api/v1/nodes/ha-431000-m04\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="node-lifecycle-controller" node=""
	E0819 17:44:47.349920       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.169.0.5:8443/api/v1/nodes/ha-431000\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="node-lifecycle-controller" node=""
	E0819 17:44:47.350203       1 node_lifecycle_controller.go:978] "Error updating node" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" logger="node-lifecycle-controller" node="ha-431000-m02"
	E0819 17:44:47.350688       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-431000-m02"
	E0819 17:44:47.350729       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="node-lifecycle-controller" node=""
	E0819 17:44:48.951070       1 resource_quota_controller.go:446] "Unhandled Error" err="failed to discover resources: Get \"https://192.169.0.5:8443/api\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	I0819 17:44:49.335185       1 garbagecollector.go:828] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.169.0.5:8443/api\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [889ab608901b] <==
	W0819 17:44:01.798192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:01.798451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	E0819 17:44:01.798209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:04.856862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:04.857057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:04.857353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:04.857954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:04.860029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:04.860226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:23.290432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:23.290751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:23.290543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:23.291205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:26.362595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:26.363019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:41.722266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:41.722341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:41.722406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:41.722425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	W0819 17:27:42.867998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:27:42.868077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.900445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:27:42.900541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:42.970545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:27:42.970765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:27:43.004003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:27:43.004103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:27:43.339820       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:30:22.272037       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:30:22.273195       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e37fe27d-f1bf-427d-a76d-96722b0c74a1(default/busybox-7dff88458-x7m6m) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x7m6m"
	E0819 17:30:22.273433       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x7m6m\": pod busybox-7dff88458-x7m6m is already assigned to node \"ha-431000\"" pod="default/busybox-7dff88458-x7m6m"
	I0819 17:30:22.273582       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x7m6m" node="ha-431000"
	E0819 17:42:29.626807       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kcrzx\": pod kindnet-kcrzx is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kcrzx" node="ha-431000-m04"
	E0819 17:42:29.626857       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4d8e74ea-456c-476b-951f-c880eb642788(kube-system/kindnet-kcrzx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kcrzx"
	E0819 17:42:29.626868       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kcrzx\": pod kindnet-kcrzx is already assigned to node \"ha-431000-m04\"" pod="kube-system/kindnet-kcrzx"
	I0819 17:42:29.626879       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kcrzx" node="ha-431000-m04"
	E0819 17:42:29.628487       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2fn5w\": pod kube-proxy-2fn5w is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2fn5w" node="ha-431000-m04"
	E0819 17:42:29.628792       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bca1b722-fe85-4f4b-a536-8228357812a4(kube-system/kube-proxy-2fn5w) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2fn5w"
	E0819 17:42:29.628962       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2fn5w\": pod kube-proxy-2fn5w is already assigned to node \"ha-431000-m04\"" pod="kube-system/kube-proxy-2fn5w"
	I0819 17:42:29.629175       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2fn5w" node="ha-431000-m04"
	E0819 17:42:52.562727       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wfcpq\": pod busybox-7dff88458-wfcpq is already assigned to node \"ha-431000-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wfcpq" node="ha-431000-m04"
	E0819 17:42:52.562826       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c7d1dd4a-aba7-4c8f-be2e-0dc5cdb4faf7(default/busybox-7dff88458-wfcpq) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wfcpq"
	E0819 17:42:52.562855       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wfcpq\": pod busybox-7dff88458-wfcpq is already assigned to node \"ha-431000-m04\"" pod="default/busybox-7dff88458-wfcpq"
	I0819 17:42:52.562878       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wfcpq" node="ha-431000-m04"
	
	
	==> kubelet <==
	Aug 19 17:44:41 ha-431000 kubelet[2153]: W0819 17:44:41.721146    2153 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:44:41 ha-431000 kubelet[2153]: E0819 17:44:41.722049    2153 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:44:41 ha-431000 kubelet[2153]: W0819 17:44:41.721887    2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2465": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:44:41 ha-431000 kubelet[2153]: E0819 17:44:41.722083    2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2465\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:44:41 ha-431000 kubelet[2153]: E0819 17:44:41.722219    2153 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-431000.17ed32299fbaf8bc\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-431000.17ed32299fbaf8bc  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-431000,UID:4be26ba36a583cb5cf787c7b12260cd6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-431000,},FirstTimestamp:2024-08-19 17:43:06.707646652 +0000 UTC m=+921.301345273,LastTimestamp:2024-08-19 17:43:10.714412846 +0000 UTC m=+925.308111459,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-431000,}"
	Aug 19 17:44:44 ha-431000 kubelet[2153]: W0819 17:44:44.792961    2153 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2586": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:44:44 ha-431000 kubelet[2153]: E0819 17:44:44.793102    2153 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2586\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:44:44 ha-431000 kubelet[2153]: W0819 17:44:44.793202    2153 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2586": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:44:44 ha-431000 kubelet[2153]: E0819 17:44:44.793237    2153 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2586\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:44:44 ha-431000 kubelet[2153]: I0819 17:44:44.793357    2153 status_manager.go:851] "Failed to get status for pod" podUID="e68070ef-bdea-45e6-b7a8-8834534fa616" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 19 17:44:44 ha-431000 kubelet[2153]: E0819 17:44:44.793672    2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-431000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 19 17:44:45 ha-431000 kubelet[2153]: E0819 17:44:45.525888    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:44:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:44:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:44:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:44:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:44:45 ha-431000 kubelet[2153]: E0819 17:44:45.962755    2153 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd"
	Aug 19 17:44:45 ha-431000 kubelet[2153]: E0819 17:44:45.962873    2153 container_log_manager.go:307] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-431000_4be26ba36a583cb5cf787c7b12260cd6/kube-apiserver/0.log\": failed to reopen container log \"262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd" path="/var/log/pods/kube-system_kube-apiserver-ha-431000_4be26ba36a583cb5cf787c7b12260cd6/kube-apiserver/0.log" currentSize=25446507 maxSize=10485760
	Aug 19 17:44:47 ha-431000 kubelet[2153]: W0819 17:44:47.866359    2153 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2450": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:44:47 ha-431000 kubelet[2153]: E0819 17:44:47.866478    2153 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2450\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:44:47 ha-431000 kubelet[2153]: W0819 17:44:47.866572    2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2633": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:44:47 ha-431000 kubelet[2153]: E0819 17:44:47.866603    2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2633\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:44:47 ha-431000 kubelet[2153]: I0819 17:44:47.866653    2153 status_manager.go:851] "Failed to get status for pod" podUID="7c2fca8c814adb84661f46fda3b2d591" pod="kube-system/kube-vip-ha-431000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-431000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 19 17:44:47 ha-431000 kubelet[2153]: W0819 17:44:47.866968    2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2569": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:44:47 ha-431000 kubelet[2153]: E0819 17:44:47.867155    2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2569\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000: exit status 2 (16.275146192s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-431000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (49.14s)

TestMultiControlPlane/serial/RestartSecondaryNode (93.02s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 node start m02 -v=7 --alsologtostderr
E0819 10:45:29.076228    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:45:43.482320    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 node start m02 -v=7 --alsologtostderr: (49.155772488s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (451.688884ms)

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:45:55.441018    6479 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:45:55.441353    6479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:55.441359    6479 out.go:358] Setting ErrFile to fd 2...
	I0819 10:45:55.441363    6479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:55.441558    6479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:45:55.441770    6479 out.go:352] Setting JSON to false
	I0819 10:45:55.441792    6479 mustload.go:65] Loading cluster: ha-431000
	I0819 10:45:55.441843    6479 notify.go:220] Checking for updates...
	I0819 10:45:55.442139    6479 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:45:55.442155    6479 status.go:255] checking status of ha-431000 ...
	I0819 10:45:55.442580    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.442642    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.453213    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51715
	I0819 10:45:55.453707    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.454258    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.454276    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.454562    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.454711    6479 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:45:55.454829    6479 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:55.454945    6479 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:45:55.455968    6479 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:45:55.455990    6479 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:45:55.456244    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.456265    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.465510    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51717
	I0819 10:45:55.465869    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.466316    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.466330    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.466563    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.466691    6479 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:45:55.466807    6479 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:45:55.467063    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.467085    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.476897    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51719
	I0819 10:45:55.477212    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.477530    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.477539    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.477736    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.477843    6479 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:45:55.478264    6479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:55.478283    6479 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:45:55.478360    6479 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:45:55.478433    6479 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:45:55.478496    6479 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:45:55.478571    6479 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:45:55.512107    6479 ssh_runner.go:195] Run: systemctl --version
	I0819 10:45:55.517010    6479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:55.528926    6479 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:45:55.528952    6479 api_server.go:166] Checking apiserver status ...
	I0819 10:45:55.529005    6479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:45:55.540617    6479 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup
	W0819 10:45:55.550470    6479 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:45:55.550538    6479 ssh_runner.go:195] Run: ls
	I0819 10:45:55.554179    6479 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:45:55.557860    6479 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:45:55.557872    6479 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:45:55.557881    6479 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:45:55.557893    6479 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:45:55.558155    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.558188    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.567605    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51723
	I0819 10:45:55.567976    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.568302    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.568314    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.568521    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.568626    6479 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:45:55.568704    6479 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:55.568792    6479 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:45:55.569790    6479 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:45:55.569800    6479 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:45:55.570044    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.570065    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.578728    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51725
	I0819 10:45:55.579056    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.579409    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.579426    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.579630    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.579728    6479 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:45:55.579809    6479 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:45:55.580055    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.580077    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.588500    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51727
	I0819 10:45:55.588818    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.589157    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.589182    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.589372    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.589466    6479 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:45:55.589602    6479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:55.589620    6479 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:45:55.589705    6479 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:45:55.589801    6479 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:45:55.589872    6479 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:45:55.589947    6479 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:45:55.626154    6479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:55.637104    6479 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:45:55.637117    6479 api_server.go:166] Checking apiserver status ...
	I0819 10:45:55.637154    6479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:45:55.649055    6479 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup
	W0819 10:45:55.657258    6479 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:45:55.657312    6479 ssh_runner.go:195] Run: ls
	I0819 10:45:55.660827    6479 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:45:55.664020    6479 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:45:55.664033    6479 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:45:55.664042    6479 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:45:55.664052    6479 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:45:55.664349    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.664372    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.673335    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51731
	I0819 10:45:55.673686    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.674026    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.674040    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.674271    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.674377    6479 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:45:55.674453    6479 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:55.674539    6479 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:45:55.675522    6479 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:45:55.675531    6479 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:45:55.675784    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.675812    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.684419    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51733
	I0819 10:45:55.684756    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.685126    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.685141    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.685357    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.685468    6479 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:45:55.685560    6479 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:45:55.685809    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.685832    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.694507    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51735
	I0819 10:45:55.694863    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.695263    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.695275    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.695547    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.695692    6479 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:45:55.695853    6479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:55.695865    6479 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:45:55.695989    6479 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:45:55.696180    6479 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:45:55.696307    6479 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:45:55.696406    6479 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:45:55.733106    6479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:55.743742    6479 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:45:55.743756    6479 api_server.go:166] Checking apiserver status ...
	I0819 10:45:55.743800    6479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:45:55.753722    6479 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:45:55.753733    6479 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:45:55.753748    6479 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:45:55.753758    6479 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:45:55.754039    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.754075    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.762918    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51738
	I0819 10:45:55.763270    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.763621    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.763632    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.763834    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.763956    6479 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:45:55.764051    6479 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:55.764137    6479 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:45:55.765163    6479 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:45:55.765174    6479 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:45:55.765432    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.765456    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.774096    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51740
	I0819 10:45:55.774436    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.774749    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.774763    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.774966    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.775083    6479 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:45:55.775177    6479 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:45:55.775438    6479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:55.775459    6479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:55.783921    6479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51742
	I0819 10:45:55.784264    6479 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:55.784620    6479 main.go:141] libmachine: Using API Version  1
	I0819 10:45:55.784637    6479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:55.784857    6479 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:55.784967    6479 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:45:55.785095    6479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:55.785106    6479 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:45:55.785188    6479 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:45:55.785267    6479 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:45:55.785371    6479 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:45:55.785445    6479 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:45:55.816553    6479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:55.828524    6479 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
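
Note on the probe sequence in the stderr above: for each control-plane node, status first samples /var disk usage (df -h /var), then looks for the kube-apiserver PID with pgrep, tries to read that process's freezer cgroup, and finally issues an HTTP GET against /healthz on the load-balanced endpoint https://192.169.0.254:8443. A minimal Go sketch of that last health probe follows; the endpoint is copied from the log, while the client settings (5s timeout, InsecureSkipVerify) are illustrative assumptions, not minikube's actual implementation, which should be expected to verify the cluster CA instead.

	// Minimal sketch of an apiserver /healthz probe like the one logged
	// above. Endpoint and TLS settings are assumptions for illustration;
	// a real client should verify the cluster CA rather than skip it.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.169.0.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode) // 200 = Running
	}
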
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (459.839679ms)

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:45:56.412014    6493 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:45:56.412380    6493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:56.412388    6493 out.go:358] Setting ErrFile to fd 2...
	I0819 10:45:56.412393    6493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:56.412662    6493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:45:56.412950    6493 out.go:352] Setting JSON to false
	I0819 10:45:56.412978    6493 mustload.go:65] Loading cluster: ha-431000
	I0819 10:45:56.413032    6493 notify.go:220] Checking for updates...
	I0819 10:45:56.413404    6493 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:45:56.413427    6493 status.go:255] checking status of ha-431000 ...
	I0819 10:45:56.413965    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.414030    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.423960    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51746
	I0819 10:45:56.424396    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.424824    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.424834    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.425067    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.425185    6493 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:45:56.425273    6493 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:56.425351    6493 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:45:56.426374    6493 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:45:56.426392    6493 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:45:56.426659    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.426682    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.435079    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51748
	I0819 10:45:56.435411    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.435801    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.435821    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.436040    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.436156    6493 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:45:56.436256    6493 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:45:56.436521    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.436552    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.450215    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51750
	I0819 10:45:56.450661    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.451002    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.451016    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.451255    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.451380    6493 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:45:56.451546    6493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:56.451566    6493 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:45:56.451648    6493 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:45:56.451724    6493 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:45:56.451836    6493 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:45:56.451930    6493 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:45:56.482698    6493 ssh_runner.go:195] Run: systemctl --version
	I0819 10:45:56.486925    6493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:56.503737    6493 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:45:56.503767    6493 api_server.go:166] Checking apiserver status ...
	I0819 10:45:56.503820    6493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:45:56.518922    6493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup
	W0819 10:45:56.527078    6493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:45:56.527135    6493 ssh_runner.go:195] Run: ls
	I0819 10:45:56.530263    6493 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:45:56.535127    6493 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:45:56.535138    6493 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:45:56.535147    6493 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:45:56.535158    6493 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:45:56.535424    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.535445    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.544317    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51754
	I0819 10:45:56.544691    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.545078    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.545097    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.545346    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.545466    6493 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:45:56.545583    6493 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:56.545707    6493 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:45:56.547156    6493 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:45:56.547170    6493 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:45:56.547557    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.547593    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.557095    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51756
	I0819 10:45:56.557503    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.557920    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.557937    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.558176    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.558307    6493 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:45:56.558407    6493 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:45:56.558691    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.558714    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.568173    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51758
	I0819 10:45:56.568550    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.568881    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.568895    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.569099    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.569202    6493 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:45:56.569333    6493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:56.569352    6493 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:45:56.569439    6493 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:45:56.569543    6493 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:45:56.569623    6493 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:45:56.569703    6493 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:45:56.605655    6493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:56.618089    6493 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:45:56.618107    6493 api_server.go:166] Checking apiserver status ...
	I0819 10:45:56.618150    6493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:45:56.629468    6493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup
	W0819 10:45:56.636874    6493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:45:56.636918    6493 ssh_runner.go:195] Run: ls
	I0819 10:45:56.640099    6493 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:45:56.643946    6493 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:45:56.643960    6493 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:45:56.643968    6493 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:45:56.643978    6493 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:45:56.644259    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.644283    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.654197    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51762
	I0819 10:45:56.654593    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.655138    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.655158    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.655401    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.655526    6493 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:45:56.655646    6493 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:56.655739    6493 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:45:56.657096    6493 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:45:56.657110    6493 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:45:56.657472    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.657509    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.668000    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51764
	I0819 10:45:56.668362    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.668677    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.668692    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.668922    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.669038    6493 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:45:56.669120    6493 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:45:56.669385    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.669411    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.677960    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51766
	I0819 10:45:56.678300    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.678620    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.678631    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.678859    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.678971    6493 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:45:56.679092    6493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:56.679103    6493 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:45:56.679185    6493 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:45:56.679285    6493 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:45:56.679401    6493 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:45:56.679480    6493 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:45:56.714848    6493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:56.725733    6493 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:45:56.725747    6493 api_server.go:166] Checking apiserver status ...
	I0819 10:45:56.725786    6493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:45:56.735710    6493 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:45:56.735720    6493 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:45:56.735730    6493 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:45:56.735739    6493 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:45:56.736010    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.736035    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.745174    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
	I0819 10:45:56.745528    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.745899    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.745918    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.746162    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.746290    6493 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:45:56.746380    6493 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:56.746458    6493 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:45:56.747515    6493 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:45:56.747528    6493 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:45:56.747891    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.747932    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.757603    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51771
	I0819 10:45:56.757973    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.758333    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.758346    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.758565    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.758694    6493 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:45:56.758793    6493 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:45:56.759058    6493 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:56.759082    6493 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:56.767833    6493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51773
	I0819 10:45:56.768196    6493 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:56.768558    6493 main.go:141] libmachine: Using API Version  1
	I0819 10:45:56.768574    6493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:56.768815    6493 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:56.768931    6493 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:45:56.769062    6493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:56.769074    6493 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:45:56.769159    6493 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:45:56.769236    6493 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:45:56.769322    6493 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:45:56.769398    6493 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:45:56.799757    6493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:56.811656    6493 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
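
Note: ha_test.go:428 keeps re-running the same status command, and every attempt exits with status 2 because m03's kubelet and apiserver are stopped while its host stays up; the non-zero code is how minikube surfaces the degraded components to the harness. A hedged Go sketch of that shell-out-and-retry pattern follows; the binary path and profile name are taken from the log, but the loop bound and sleep are illustrative, not the harness's actual retry logic.

	// Sketch of retrying "minikube status" and inspecting the exit code,
	// as the runs above reflect. Loop bound and sleep are assumptions.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 4; i++ {
			cmd := exec.Command("out/minikube-darwin-amd64",
				"-p", "ha-431000", "status", "-v=7", "--alsologtostderr")
			if err := cmd.Run(); err == nil {
				fmt.Println("cluster healthy")
				return
			} else if exitErr, ok := err.(*exec.ExitError); ok {
				fmt.Println("attempt", i+1, "exit status", exitErr.ExitCode())
			}
			time.Sleep(time.Second)
		}
	}
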
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (483.375113ms)

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:45:58.940302    6509 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:45:58.940587    6509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:58.940593    6509 out.go:358] Setting ErrFile to fd 2...
	I0819 10:45:58.940597    6509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:45:58.940790    6509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:45:58.940987    6509 out.go:352] Setting JSON to false
	I0819 10:45:58.941009    6509 mustload.go:65] Loading cluster: ha-431000
	I0819 10:45:58.941057    6509 notify.go:220] Checking for updates...
	I0819 10:45:58.941335    6509 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:45:58.941357    6509 status.go:255] checking status of ha-431000 ...
	I0819 10:45:58.941791    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:58.941829    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:58.952064    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51777
	I0819 10:45:58.952485    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:58.952932    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:58.952952    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:58.953174    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:58.953299    6509 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:45:58.953390    6509 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:58.953501    6509 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:45:58.954591    6509 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:45:58.954613    6509 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:45:58.954872    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:58.954895    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:58.963753    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51779
	I0819 10:45:58.964250    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:58.964711    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:58.964729    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:58.965025    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:58.965152    6509 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:45:58.965273    6509 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:45:58.965590    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:58.965619    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:58.976945    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51781
	I0819 10:45:58.977299    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:58.977698    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:58.977720    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:58.977919    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:58.978022    6509 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:45:58.978170    6509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:58.978188    6509 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:45:58.978260    6509 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:45:58.978350    6509 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:45:58.978448    6509 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:45:58.978527    6509 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:45:59.009212    6509 ssh_runner.go:195] Run: systemctl --version
	I0819 10:45:59.014549    6509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:59.027135    6509 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:45:59.027157    6509 api_server.go:166] Checking apiserver status ...
	I0819 10:45:59.027217    6509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:45:59.039198    6509 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup
	W0819 10:45:59.049837    6509 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:45:59.049902    6509 ssh_runner.go:195] Run: ls
	I0819 10:45:59.054427    6509 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:45:59.059976    6509 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:45:59.059994    6509 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:45:59.060007    6509 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:45:59.060025    6509 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:45:59.060330    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:59.060359    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:59.071754    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51785
	I0819 10:45:59.072129    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:59.072479    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:59.072492    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:59.072699    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:59.072811    6509 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:45:59.072897    6509 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:59.072997    6509 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:45:59.074008    6509 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:45:59.074017    6509 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:45:59.074276    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:59.074301    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:59.082773    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51787
	I0819 10:45:59.083136    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:59.083452    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:59.083470    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:59.083669    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:59.083791    6509 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:45:59.083867    6509 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:45:59.084105    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:59.084126    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:59.094366    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51789
	I0819 10:45:59.094853    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:59.095406    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:59.095428    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:59.095744    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:59.095909    6509 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:45:59.096092    6509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:59.096109    6509 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:45:59.096272    6509 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:45:59.096381    6509 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:45:59.096540    6509 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:45:59.096639    6509 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:45:59.136958    6509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:59.153602    6509 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:45:59.153622    6509 api_server.go:166] Checking apiserver status ...
	I0819 10:45:59.153671    6509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:45:59.170234    6509 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup
	W0819 10:45:59.178267    6509 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:45:59.178318    6509 ssh_runner.go:195] Run: ls
	I0819 10:45:59.181383    6509 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:45:59.184532    6509 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:45:59.184557    6509 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:45:59.184566    6509 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:45:59.184579    6509 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:45:59.184857    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:59.184877    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:59.194581    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51793
	I0819 10:45:59.195075    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:59.195576    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:59.195592    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:59.195932    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:59.196134    6509 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:45:59.196260    6509 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:59.196391    6509 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:45:59.197924    6509 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:45:59.197941    6509 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:45:59.198354    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:59.198399    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:59.209895    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51795
	I0819 10:45:59.210381    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:59.210865    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:59.210878    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:59.211189    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:59.211345    6509 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:45:59.211462    6509 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:45:59.211893    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:59.211933    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:59.222489    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51797
	I0819 10:45:59.222854    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:59.223200    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:59.223211    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:59.223421    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:59.223536    6509 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:45:59.223672    6509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:59.223684    6509 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:45:59.223764    6509 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:45:59.223849    6509 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:45:59.223949    6509 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:45:59.224044    6509 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:45:59.261267    6509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:59.273846    6509 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:45:59.273859    6509 api_server.go:166] Checking apiserver status ...
	I0819 10:45:59.273898    6509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:45:59.283755    6509 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:45:59.283770    6509 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:45:59.283778    6509 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:45:59.283796    6509 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:45:59.284058    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:59.284081    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:59.293959    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51800
	I0819 10:45:59.294408    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:59.294847    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:59.294870    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:59.295153    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:59.295290    6509 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:45:59.295391    6509 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:45:59.295501    6509 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:45:59.296652    6509 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:45:59.296671    6509 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:45:59.297056    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:59.297100    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:59.306747    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51802
	I0819 10:45:59.307096    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:59.307438    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:59.307454    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:59.307668    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:59.307782    6509 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:45:59.307921    6509 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:45:59.308268    6509 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:45:59.308299    6509 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:45:59.318718    6509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51804
	I0819 10:45:59.319048    6509 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:45:59.319412    6509 main.go:141] libmachine: Using API Version  1
	I0819 10:45:59.319429    6509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:45:59.319622    6509 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:45:59.319736    6509 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:45:59.319881    6509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:45:59.319892    6509 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:45:59.319977    6509 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:45:59.320052    6509 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:45:59.320134    6509 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:45:59.320214    6509 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:45:59.350071    6509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:45:59.362271    6509 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
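
Note on the recurring "unable to find freezer cgroup" warnings: status greps /proc/<pid>/cgroup for a "freezer" controller entry, which only exists under cgroup v1; on a cgroup v2 guest the file holds a single "0::/..." line (or the controller is otherwise absent), so the egrep exits 1 and the code falls back to the healthz probe, as the log shows. The sketch below parses the same file in Go; PID 7164 is copied from the log and the fallback message is illustrative.

	// Sketch of the freezer-cgroup lookup whose failure produces the
	// warnings above. cgroup v1 lines look like "<id>:<controllers>:<path>";
	// cgroup v2 exposes only "0::<path>", so no freezer entry is found.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func freezerCgroup(pid int) (string, bool) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", false
		}
		defer f.Close()
		s := bufio.NewScanner(f)
		for s.Scan() {
			parts := strings.SplitN(s.Text(), ":", 3)
			if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
				return parts[2], true
			}
		}
		return "", false
	}

	func main() {
		if path, ok := freezerCgroup(7164); ok {
			fmt.Println("freezer cgroup:", path)
		} else {
			fmt.Println("no freezer cgroup; fall back to healthz probe")
		}
	}
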
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (477.318424ms)

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:46:00.632977    6523 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:46:00.633267    6523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:00.633272    6523 out.go:358] Setting ErrFile to fd 2...
	I0819 10:46:00.633276    6523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:00.633451    6523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:46:00.633620    6523 out.go:352] Setting JSON to false
	I0819 10:46:00.633640    6523 mustload.go:65] Loading cluster: ha-431000
	I0819 10:46:00.633673    6523 notify.go:220] Checking for updates...
	I0819 10:46:00.633949    6523 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:46:00.633964    6523 status.go:255] checking status of ha-431000 ...
	I0819 10:46:00.634308    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.634353    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.645546    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51808
	I0819 10:46:00.646029    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.646602    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.646645    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.646947    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.647107    6523 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:46:00.647236    6523 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:00.647330    6523 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:46:00.648518    6523 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:46:00.648543    6523 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:00.648795    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.648819    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.659998    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51810
	I0819 10:46:00.660412    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.661047    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.661080    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.661417    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.661573    6523 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:46:00.661705    6523 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:00.662005    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.662059    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.672261    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51812
	I0819 10:46:00.672614    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.672925    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.672933    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.673157    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.673290    6523 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:46:00.673438    6523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:00.673457    6523 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:46:00.673548    6523 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:46:00.673631    6523 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:46:00.673710    6523 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:46:00.673790    6523 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:46:00.705973    6523 ssh_runner.go:195] Run: systemctl --version
	I0819 10:46:00.712866    6523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:00.725863    6523 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:00.725886    6523 api_server.go:166] Checking apiserver status ...
	I0819 10:46:00.725927    6523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:00.738434    6523 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup
	W0819 10:46:00.748433    6523 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:00.748488    6523 ssh_runner.go:195] Run: ls
	I0819 10:46:00.752287    6523 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:00.758253    6523 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:00.758284    6523 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:46:00.758296    6523 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:00.758314    6523 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:46:00.758691    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.758739    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.768889    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51816
	I0819 10:46:00.769293    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.769611    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.769621    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.769822    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.769952    6523 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:46:00.770040    6523 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:00.770141    6523 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:46:00.771192    6523 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:46:00.771202    6523 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:00.771440    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.771463    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.780208    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51818
	I0819 10:46:00.780545    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.780850    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.780860    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.781090    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.781208    6523 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:46:00.781296    6523 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:00.781575    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.781601    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.790650    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51820
	I0819 10:46:00.791020    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.791421    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.791436    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.791670    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.791788    6523 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:46:00.791948    6523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:00.791960    6523 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:46:00.792063    6523 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:46:00.792154    6523 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:46:00.792256    6523 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:46:00.792358    6523 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:46:00.830548    6523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:00.841871    6523 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:00.841887    6523 api_server.go:166] Checking apiserver status ...
	I0819 10:46:00.841930    6523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:00.855743    6523 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup
	W0819 10:46:00.864337    6523 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:00.864387    6523 ssh_runner.go:195] Run: ls
	I0819 10:46:00.868382    6523 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:00.871591    6523 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:00.871603    6523 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:46:00.871611    6523 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:00.871620    6523 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:46:00.871882    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.871903    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.880695    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51824
	I0819 10:46:00.881045    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.881371    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.881382    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.881582    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.881700    6523 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:46:00.881783    6523 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:00.881864    6523 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:46:00.882931    6523 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:46:00.882941    6523 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:00.883191    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.883224    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.892742    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51826
	I0819 10:46:00.893143    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.893521    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.893536    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.893751    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.893872    6523 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:46:00.893974    6523 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:00.894253    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.894281    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.904489    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51828
	I0819 10:46:00.904898    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.905348    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.905362    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.905649    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.905801    6523 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:46:00.905966    6523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:00.905978    6523 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:46:00.906093    6523 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:46:00.906196    6523 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:46:00.906292    6523 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:46:00.906387    6523 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:46:00.941316    6523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:00.953301    6523 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:00.953321    6523 api_server.go:166] Checking apiserver status ...
	I0819 10:46:00.953372    6523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:46:00.964787    6523 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:00.964802    6523 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:46:00.964817    6523 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:00.964832    6523 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:46:00.965246    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.965279    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.974288    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51831
	I0819 10:46:00.974645    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.974968    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.974979    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.975190    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.975304    6523 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:46:00.975389    6523 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:00.975467    6523 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:46:00.976529    6523 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:46:00.976540    6523 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:00.976778    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.976800    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.985636    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51833
	I0819 10:46:00.985967    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.986282    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.986294    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.986565    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.986712    6523 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:46:00.986849    6523 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:00.987144    6523 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:00.987170    6523 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:00.996741    6523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51835
	I0819 10:46:00.997120    6523 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:00.997460    6523 main.go:141] libmachine: Using API Version  1
	I0819 10:46:00.997472    6523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:00.997683    6523 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:00.997804    6523 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:46:00.997943    6523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:00.997962    6523 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:46:00.998057    6523 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:46:00.998154    6523 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:46:00.998269    6523 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:46:00.998363    6523 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:46:01.028570    6523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:01.039833    6523 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
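Each control-plane check above follows the same three steps: find the kube-apiserver pid over SSH with pgrep, try to resolve its freezer cgroup (which fails here, per the warning), and finally GET https://192.169.0.254:8443/healthz, treating an HTTP 200 "ok" as Running. A minimal sketch of that last probe, assuming TLS verification is skipped for brevity (the real check would trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy mirrors the healthz probe in the trace: a healthy
	// control plane answers 200 with the body "ok".
	func apiserverHealthy(server string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: InsecureSkipVerify stands in for the cluster CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(server + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		healthy, err := apiserverHealthy("https://192.169.0.254:8443")
		fmt.Println(healthy, err)
	}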
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (523.60751ms)

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:46:03.172640    6541 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:46:03.172929    6541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:03.172934    6541 out.go:358] Setting ErrFile to fd 2...
	I0819 10:46:03.172938    6541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:03.173128    6541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:46:03.173310    6541 out.go:352] Setting JSON to false
	I0819 10:46:03.173331    6541 mustload.go:65] Loading cluster: ha-431000
	I0819 10:46:03.173375    6541 notify.go:220] Checking for updates...
	I0819 10:46:03.173673    6541 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:46:03.173689    6541 status.go:255] checking status of ha-431000 ...
	I0819 10:46:03.174065    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.174133    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.183285    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51839
	I0819 10:46:03.183649    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.184346    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.184415    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.184744    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.184914    6541 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:46:03.185097    6541 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:03.185342    6541 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:46:03.188116    6541 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:46:03.188146    6541 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:03.188527    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.188558    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.200447    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51841
	I0819 10:46:03.200955    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.201505    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.201533    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.201845    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.202010    6541 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:46:03.202138    6541 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:03.202561    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.202601    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.215034    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51843
	I0819 10:46:03.215523    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.216031    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.216047    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.216383    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.216555    6541 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:46:03.216786    6541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:03.216823    6541 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:46:03.216944    6541 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:46:03.217055    6541 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:46:03.217182    6541 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:46:03.217316    6541 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:46:03.254675    6541 ssh_runner.go:195] Run: systemctl --version
	I0819 10:46:03.261806    6541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:03.275145    6541 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:03.275170    6541 api_server.go:166] Checking apiserver status ...
	I0819 10:46:03.275205    6541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:03.288991    6541 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup
	W0819 10:46:03.299185    6541 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:03.299259    6541 ssh_runner.go:195] Run: ls
	I0819 10:46:03.304322    6541 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:03.310153    6541 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:03.310174    6541 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:46:03.310188    6541 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:03.310204    6541 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:46:03.310592    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.310629    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.321194    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51847
	I0819 10:46:03.321562    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.321886    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.321897    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.322126    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.322246    6541 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:46:03.322338    6541 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:03.322417    6541 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:46:03.323502    6541 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:46:03.323514    6541 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:03.323777    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.323808    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.332876    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51849
	I0819 10:46:03.333231    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.333586    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.333601    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.333810    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.333967    6541 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:46:03.334104    6541 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:03.334481    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.334522    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.347391    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51851
	I0819 10:46:03.347855    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.348358    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.348380    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.348718    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.348895    6541 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:46:03.349094    6541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:03.349112    6541 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:46:03.349246    6541 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:46:03.349373    6541 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:46:03.349494    6541 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:46:03.349618    6541 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:46:03.393102    6541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:03.407934    6541 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:03.407950    6541 api_server.go:166] Checking apiserver status ...
	I0819 10:46:03.407988    6541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:03.424849    6541 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup
	W0819 10:46:03.433037    6541 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:03.433096    6541 ssh_runner.go:195] Run: ls
	I0819 10:46:03.436797    6541 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:03.442638    6541 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:03.442659    6541 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:46:03.442673    6541 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:03.442690    6541 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:46:03.443009    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.443043    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.452999    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51855
	I0819 10:46:03.453457    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.453857    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.453875    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.454131    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.454280    6541 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:46:03.454397    6541 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:03.454533    6541 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:46:03.456001    6541 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:46:03.456019    6541 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:03.456411    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.456450    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.466505    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51857
	I0819 10:46:03.466870    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.467198    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.467212    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.467446    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.467577    6541 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:46:03.467665    6541 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:03.467939    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.467965    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.476733    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51859
	I0819 10:46:03.477077    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.477450    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.477465    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.477672    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.477776    6541 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:46:03.477898    6541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:03.477910    6541 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:46:03.478002    6541 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:46:03.478082    6541 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:46:03.478176    6541 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:46:03.478247    6541 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:46:03.516366    6541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:03.528559    6541 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:03.528579    6541 api_server.go:166] Checking apiserver status ...
	I0819 10:46:03.528624    6541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:46:03.540394    6541 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:03.540409    6541 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:46:03.540422    6541 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:03.540435    6541 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:46:03.540853    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.540882    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.552552    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51862
	I0819 10:46:03.553011    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.553504    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.553531    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.553908    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.554072    6541 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:46:03.554208    6541 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:03.554337    6541 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:46:03.555928    6541 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:46:03.555940    6541 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:03.556415    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.556450    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.567619    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51864
	I0819 10:46:03.568049    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.568468    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.568483    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.568698    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.568824    6541 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:46:03.568919    6541 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:03.569168    6541 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:03.569189    6541 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:03.577796    6541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51866
	I0819 10:46:03.578179    6541 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:03.578498    6541 main.go:141] libmachine: Using API Version  1
	I0819 10:46:03.578509    6541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:03.578749    6541 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:03.578869    6541 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:46:03.578995    6541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:03.579005    6541 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:46:03.579105    6541 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:46:03.579193    6541 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:46:03.579266    6541 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:46:03.579347    6541 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:46:03.610883    6541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:03.623932    6541 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
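The recurring warning "unable to find freezer cgroup" is benign: the egrep only matches cgroup-v1 lines of the form "N:freezer:/path", so on a guest using the unified cgroup-v2 hierarchy (a single "0::/..." entry in /proc/<pid>/cgroup) it exits 1, and the status check simply falls through to the healthz probe. A sketch of that lookup under the same assumption, returning "" instead of failing when no freezer line exists:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// freezerPath scans /proc/<pid>/cgroup for a v1 freezer entry and
	// returns "" on cgroup-v2 guests, mirroring the non-fatal warning.
	func freezerPath(pid int) (string, error) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if parts := strings.SplitN(sc.Text(), ":", 3); len(parts) == 3 && parts[1] == "freezer" {
				return parts[2], nil
			}
		}
		return "", sc.Err()
	}

	func main() {
		path, err := freezerPath(os.Getpid())
		fmt.Println(path, err)
	}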
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (528.123935ms)

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:46:09.034850    6570 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:46:09.035841    6570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:09.035851    6570 out.go:358] Setting ErrFile to fd 2...
	I0819 10:46:09.035858    6570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:09.036534    6570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:46:09.036750    6570 out.go:352] Setting JSON to false
	I0819 10:46:09.036774    6570 mustload.go:65] Loading cluster: ha-431000
	I0819 10:46:09.036823    6570 notify.go:220] Checking for updates...
	I0819 10:46:09.037083    6570 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:46:09.037101    6570 status.go:255] checking status of ha-431000 ...
	I0819 10:46:09.037435    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.037480    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.048641    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51870
	I0819 10:46:09.049118    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.049533    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.049543    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.049792    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.049918    6570 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:46:09.050018    6570 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:09.050106    6570 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:46:09.051236    6570 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:46:09.051256    6570 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:09.051526    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.051550    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.061364    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51872
	I0819 10:46:09.061900    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.062329    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.062355    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.062711    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.062881    6570 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:46:09.062999    6570 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:09.063266    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.063297    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.073309    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51874
	I0819 10:46:09.073651    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.074017    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.074036    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.074267    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.074392    6570 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:46:09.074540    6570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:09.074560    6570 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:46:09.074653    6570 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:46:09.074761    6570 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:46:09.074858    6570 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:46:09.074945    6570 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:46:09.108556    6570 ssh_runner.go:195] Run: systemctl --version
	I0819 10:46:09.114265    6570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:09.131656    6570 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:09.131679    6570 api_server.go:166] Checking apiserver status ...
	I0819 10:46:09.131717    6570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:09.147154    6570 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup
	W0819 10:46:09.155942    6570 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:09.156033    6570 ssh_runner.go:195] Run: ls
	I0819 10:46:09.160378    6570 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:09.165838    6570 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:09.165854    6570 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:46:09.165864    6570 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:09.165884    6570 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:46:09.166156    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.166179    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.175410    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51878
	I0819 10:46:09.175851    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.176201    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.176211    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.176435    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.176560    6570 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:46:09.176652    6570 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:09.176762    6570 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:46:09.177855    6570 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:46:09.177865    6570 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:09.178122    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.178150    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.187990    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51880
	I0819 10:46:09.188451    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.188894    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.188912    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.189185    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.189329    6570 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:46:09.189430    6570 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:09.189723    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.189747    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.199533    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51882
	I0819 10:46:09.200026    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.200418    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.200430    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.200724    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.200895    6570 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:46:09.201084    6570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:09.201099    6570 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:46:09.201245    6570 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:46:09.201367    6570 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:46:09.201485    6570 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:46:09.201595    6570 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:46:09.242758    6570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:09.257358    6570 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:09.257376    6570 api_server.go:166] Checking apiserver status ...
	I0819 10:46:09.257422    6570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:09.276030    6570 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup
	W0819 10:46:09.288262    6570 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:09.288323    6570 ssh_runner.go:195] Run: ls
	I0819 10:46:09.292642    6570 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:09.297488    6570 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:09.297504    6570 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:46:09.297513    6570 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:09.297522    6570 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:46:09.297796    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.297822    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.308341    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51886
	I0819 10:46:09.308756    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.309223    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.309247    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.309601    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.309756    6570 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:46:09.309882    6570 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:09.310007    6570 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:46:09.311125    6570 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:46:09.311135    6570 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:09.311410    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.311446    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.321543    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51888
	I0819 10:46:09.321934    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.322321    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.322338    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.322526    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.322625    6570 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:46:09.322721    6570 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:09.323001    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.323027    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.332384    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51890
	I0819 10:46:09.332856    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.333254    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.333267    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.333602    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.333752    6570 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:46:09.333942    6570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:09.333955    6570 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:46:09.334057    6570 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:46:09.334180    6570 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:46:09.334316    6570 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:46:09.334435    6570 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:46:09.374098    6570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:09.388917    6570 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:09.388946    6570 api_server.go:166] Checking apiserver status ...
	I0819 10:46:09.388997    6570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:46:09.402679    6570 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:09.402693    6570 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:46:09.402702    6570 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:09.402734    6570 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:46:09.403029    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.403058    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.415380    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51893
	I0819 10:46:09.415840    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.416336    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.416364    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.416687    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.416840    6570 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:46:09.416973    6570 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:09.417125    6570 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:46:09.418592    6570 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:46:09.418605    6570 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:09.418892    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.418920    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.428667    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51895
	I0819 10:46:09.429080    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.429482    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.429499    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.429735    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.429859    6570 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:46:09.429961    6570 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:09.430253    6570 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:09.430278    6570 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:09.440473    6570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51897
	I0819 10:46:09.440956    6570 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:09.441405    6570 main.go:141] libmachine: Using API Version  1
	I0819 10:46:09.441425    6570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:09.441666    6570 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:09.441812    6570 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:46:09.441987    6570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:09.442003    6570 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:46:09.442120    6570 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:46:09.442276    6570 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:46:09.442414    6570 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:46:09.442518    6570 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:46:09.473952    6570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:09.499018    6570 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
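The stderr trace above shows how "minikube status" inspects each node: it launches the hyperkit driver plugin, opens an SSH session, samples disk usage on /var, asks systemd whether the kubelet unit is active, and, for control-plane nodes, looks for a kube-apiserver process before querying the load-balanced /healthz endpoint. The same probes can be replayed by hand; this is a minimal sketch, assuming the ha-431000 profile, the m03 node name, and the 192.169.0.254:8443 VIP taken from this run:

	out/minikube-darwin-amd64 -p ha-431000 ssh -n m03 -- "df -h /var | awk 'NR==2{print \$5}'"          # disk usage probe
	out/minikube-darwin-amd64 -p ha-431000 ssh -n m03 -- 'sudo systemctl is-active --quiet service kubelet; echo kubelet=$?'
	out/minikube-darwin-amd64 -p ha-431000 ssh -n m03 -- 'sudo pgrep -xnf kube-apiserver.*minikube.*'   # exits 1 when no apiserver process runs
	curl -ks https://192.169.0.254:8443/healthz                                                         # prints "ok" while a healthy apiserver backs the VIP

On m03 the pgrep step finds no process (exit status 1), which is why that node is reported as kubelet: Stopped / apiserver: Stopped even though its host stays Running.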
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (542.613491ms)

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:46:17.073273    6595 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:46:17.074078    6595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:17.074087    6595 out.go:358] Setting ErrFile to fd 2...
	I0819 10:46:17.074093    6595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:17.074637    6595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:46:17.074873    6595 out.go:352] Setting JSON to false
	I0819 10:46:17.074895    6595 mustload.go:65] Loading cluster: ha-431000
	I0819 10:46:17.074941    6595 notify.go:220] Checking for updates...
	I0819 10:46:17.075201    6595 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:46:17.075218    6595 status.go:255] checking status of ha-431000 ...
	I0819 10:46:17.075594    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.075642    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.085438    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51901
	I0819 10:46:17.085790    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.086192    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.086211    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.086449    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.086573    6595 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:46:17.086655    6595 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:17.086796    6595 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:46:17.088417    6595 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:46:17.088494    6595 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:17.089317    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.089390    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.101854    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51903
	I0819 10:46:17.102508    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.103367    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.103431    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.104179    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.104369    6595 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:46:17.104533    6595 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:17.104900    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.104935    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.116402    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51905
	I0819 10:46:17.116982    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.117382    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.117401    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.117675    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.117852    6595 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:46:17.118047    6595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:17.118065    6595 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:46:17.118187    6595 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:46:17.118293    6595 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:46:17.118414    6595 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:46:17.118540    6595 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:46:17.154399    6595 ssh_runner.go:195] Run: systemctl --version
	I0819 10:46:17.159454    6595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:17.175388    6595 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:17.175412    6595 api_server.go:166] Checking apiserver status ...
	I0819 10:46:17.175463    6595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:17.192869    6595 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup
	W0819 10:46:17.202000    6595 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:17.202078    6595 ssh_runner.go:195] Run: ls
	I0819 10:46:17.206597    6595 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:17.211731    6595 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:17.211748    6595 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:46:17.211758    6595 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:17.211771    6595 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:46:17.212076    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.212103    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.221728    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51909
	I0819 10:46:17.222130    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.222559    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.222576    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.222856    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.223022    6595 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:46:17.223134    6595 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:17.223255    6595 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:46:17.224372    6595 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:46:17.224385    6595 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:17.224660    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.224686    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.234235    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51911
	I0819 10:46:17.234664    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.235054    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.235075    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.235324    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.235459    6595 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:46:17.235562    6595 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:17.235877    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.235909    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.246120    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51913
	I0819 10:46:17.246512    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.246929    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.246963    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.247270    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.247414    6595 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:46:17.247616    6595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:17.247628    6595 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:46:17.247736    6595 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:46:17.247847    6595 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:46:17.247966    6595 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:46:17.248072    6595 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:46:17.298284    6595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:17.316025    6595 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:17.316043    6595 api_server.go:166] Checking apiserver status ...
	I0819 10:46:17.316086    6595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:17.341883    6595 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup
	W0819 10:46:17.356395    6595 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:17.356488    6595 ssh_runner.go:195] Run: ls
	I0819 10:46:17.360462    6595 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:17.366743    6595 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:17.366761    6595 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:46:17.366779    6595 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:17.366791    6595 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:46:17.367266    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.367306    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.377532    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51917
	I0819 10:46:17.378077    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.378496    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.378513    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.378742    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.378848    6595 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:46:17.378968    6595 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:17.379074    6595 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:46:17.380182    6595 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:46:17.380196    6595 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:17.380492    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.380533    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.390059    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51919
	I0819 10:46:17.390474    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.390834    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.390844    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.391079    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.391201    6595 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:46:17.391300    6595 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:17.391585    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.391608    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.400993    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51921
	I0819 10:46:17.401365    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.401758    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.401772    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.402004    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.402114    6595 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:46:17.402265    6595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:17.402277    6595 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:46:17.402357    6595 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:46:17.402441    6595 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:46:17.402529    6595 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:46:17.402626    6595 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:46:17.438253    6595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:17.452031    6595 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:17.452049    6595 api_server.go:166] Checking apiserver status ...
	I0819 10:46:17.452098    6595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:46:17.464094    6595 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:17.464110    6595 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:46:17.464123    6595 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:17.464141    6595 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:46:17.464536    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.464576    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.475181    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51924
	I0819 10:46:17.475554    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.475939    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.475955    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.476199    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.476330    6595 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:46:17.476435    6595 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:17.476543    6595 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:46:17.477646    6595 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:46:17.477663    6595 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:17.477966    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.477997    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.487230    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51926
	I0819 10:46:17.487685    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.488086    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.488105    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.488339    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.488458    6595 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:46:17.488548    6595 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:17.488834    6595 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:17.488868    6595 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:17.499451    6595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51928
	I0819 10:46:17.499951    6595 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:17.500470    6595 main.go:141] libmachine: Using API Version  1
	I0819 10:46:17.500485    6595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:17.500736    6595 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:17.500859    6595 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:46:17.501005    6595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:17.501018    6595 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:46:17.501145    6595 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:46:17.501247    6595 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:46:17.501366    6595 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:46:17.501482    6595 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:46:17.533886    6595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:17.546130    6595 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
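Each successful control-plane probe above also logs a W-level "unable to find freezer cgroup" warning: status first greps /proc/<pid>/cgroup for a freezer entry to see whether the apiserver process is frozen, and when no such entry exists it falls through to the /healthz request, which returns 200 here. On a cgroup v2 guest there is no freezer controller line in /proc/<pid>/cgroup at all, so on such guests the warning is expected noise rather than a failure. A one-line check of the guest's cgroup flavor, sketched against this run's profile:

	out/minikube-darwin-amd64 -p ha-431000 ssh -- 'stat -fc %T /sys/fs/cgroup'   # cgroup2fs means v2 (no freezer controller); tmpfs means v1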
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (573.381297ms)

-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 10:46:33.372865    6621 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:46:33.373162    6621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:33.373170    6621 out.go:358] Setting ErrFile to fd 2...
	I0819 10:46:33.373174    6621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:46:33.373385    6621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:46:33.373597    6621 out.go:352] Setting JSON to false
	I0819 10:46:33.373620    6621 mustload.go:65] Loading cluster: ha-431000
	I0819 10:46:33.373719    6621 notify.go:220] Checking for updates...
	I0819 10:46:33.373973    6621 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:46:33.373992    6621 status.go:255] checking status of ha-431000 ...
	I0819 10:46:33.374360    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.374408    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.383914    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I0819 10:46:33.384248    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.384652    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.384679    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.384915    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.385029    6621 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:46:33.385121    6621 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:33.385200    6621 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:46:33.386202    6621 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:46:33.386223    6621 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:33.386488    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.386509    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.396042    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51934
	I0819 10:46:33.396496    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.396997    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.397016    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.397396    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.397576    6621 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:46:33.397698    6621 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:46:33.397980    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.398017    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.414804    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51936
	I0819 10:46:33.415396    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.415930    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.415953    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.416276    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.416445    6621 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:46:33.416657    6621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:33.416690    6621 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:46:33.416824    6621 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:46:33.416983    6621 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:46:33.417104    6621 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:46:33.417236    6621 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:46:33.455435    6621 ssh_runner.go:195] Run: systemctl --version
	I0819 10:46:33.461255    6621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:33.484250    6621 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:33.484275    6621 api_server.go:166] Checking apiserver status ...
	I0819 10:46:33.484318    6621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:33.497563    6621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup
	W0819 10:46:33.507677    6621 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:33.507773    6621 ssh_runner.go:195] Run: ls
	I0819 10:46:33.512633    6621 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:33.519392    6621 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:33.519411    6621 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:46:33.519422    6621 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:33.519443    6621 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:46:33.519734    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.519762    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.530097    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51940
	I0819 10:46:33.530437    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.530777    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.530791    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.531006    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.531115    6621 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:46:33.531190    6621 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:33.531282    6621 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:46:33.532270    6621 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:46:33.532279    6621 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:33.532552    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.532575    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.541730    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51942
	I0819 10:46:33.542257    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.542776    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.542794    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.543101    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.543275    6621 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:46:33.543397    6621 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:46:33.543824    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.543864    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.557659    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51944
	I0819 10:46:33.558139    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.558620    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.558643    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.558894    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.559055    6621 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:46:33.559243    6621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:33.559271    6621 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:46:33.559400    6621 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:46:33.559533    6621 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:46:33.559665    6621 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:46:33.559788    6621 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:46:33.623724    6621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:33.637238    6621 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:33.637252    6621 api_server.go:166] Checking apiserver status ...
	I0819 10:46:33.637291    6621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:46:33.652263    6621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup
	W0819 10:46:33.669627    6621 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2061/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:33.669707    6621 ssh_runner.go:195] Run: ls
	I0819 10:46:33.673514    6621 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:46:33.681823    6621 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:46:33.681841    6621 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:46:33.681850    6621 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:33.681860    6621 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:46:33.682149    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.682175    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.692616    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51948
	I0819 10:46:33.693138    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.693699    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.693718    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.694071    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.694264    6621 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:46:33.694407    6621 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:33.694555    6621 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:46:33.696228    6621 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:46:33.696244    6621 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:33.696724    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.696776    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.710775    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51950
	I0819 10:46:33.711310    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.711859    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.711876    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.712216    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.712410    6621 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:46:33.712555    6621 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:46:33.713025    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.713060    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.726552    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51952
	I0819 10:46:33.726964    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.727380    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.727393    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.727661    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.727800    6621 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:46:33.728004    6621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:33.728021    6621 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:46:33.728112    6621 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:46:33.728191    6621 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:46:33.728353    6621 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:46:33.728448    6621 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:46:33.766121    6621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:33.779730    6621 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:46:33.779744    6621 api_server.go:166] Checking apiserver status ...
	I0819 10:46:33.779783    6621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:46:33.789492    6621 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:46:33.789503    6621 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:46:33.789513    6621 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:46:33.789524    6621 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:46:33.789802    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.789826    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.800577    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51955
	I0819 10:46:33.800984    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.801349    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.801360    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.801635    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.801788    6621 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:46:33.801886    6621 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:46:33.801990    6621 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:46:33.803106    6621 status.go:330] ha-431000-m04 host status = "Running" (err=<nil>)
	I0819 10:46:33.803117    6621 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:33.803392    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.803418    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.816355    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51957
	I0819 10:46:33.816938    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.817453    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.817477    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.817795    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.817972    6621 main.go:141] libmachine: (ha-431000-m04) Calling .GetIP
	I0819 10:46:33.818110    6621 host.go:66] Checking if "ha-431000-m04" exists ...
	I0819 10:46:33.818585    6621 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:46:33.818624    6621 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:46:33.828847    6621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51959
	I0819 10:46:33.829297    6621 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:46:33.829726    6621 main.go:141] libmachine: Using API Version  1
	I0819 10:46:33.829744    6621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:46:33.829985    6621 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:46:33.830124    6621 main.go:141] libmachine: (ha-431000-m04) Calling .DriverName
	I0819 10:46:33.830316    6621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:46:33.830327    6621 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHHostname
	I0819 10:46:33.830454    6621 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHPort
	I0819 10:46:33.830569    6621 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHKeyPath
	I0819 10:46:33.830680    6621 main.go:141] libmachine: (ha-431000-m04) Calling .GetSSHUsername
	I0819 10:46:33.830818    6621 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m04/id_rsa Username:docker}
	I0819 10:46:33.865682    6621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:46:33.881026    6621 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr" : exit status 2
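"minikube status" maps component health onto its exit code, so with m03's kubelet and apiserver down the command keeps exiting 2 across the retries above and the harness records the step as failed, even though the VIP still answers /healthz from the two remaining control planes. Reproducing the check the test runs, assuming the same binary path and profile:

	out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
	echo "exit=$?"   # 2 for this run, matching the kubelet: Stopped / apiserver: Stopped rows for m03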
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (4.587900573s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:40 PDT | 19 Aug 24 10:40 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT |                     |
	|         | busybox-7dff88458-wfcpq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-431000 -v=7                | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node stop m02 -v=7         | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:43 PDT | 19 Aug 24 10:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node start m02 -v=7        | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:45 PDT | 19 Aug 24 10:45 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:27:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:27:09.441458    4789 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:27:09.441716    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441721    4789 out.go:358] Setting ErrFile to fd 2...
	I0819 10:27:09.441725    4789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:27:09.441914    4789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:27:09.443405    4789 out.go:352] Setting JSON to false
	I0819 10:27:09.468451    4789 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3399,"bootTime":1724085030,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:27:09.468547    4789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:27:09.554597    4789 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:27:09.577770    4789 notify.go:220] Checking for updates...
	I0819 10:27:09.609734    4789 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:27:09.676944    4789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:09.699980    4789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:27:09.722951    4789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:27:09.744804    4789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.765726    4789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:27:09.787204    4789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:27:09.817679    4789 out.go:177] * Using the hyperkit driver based on user configuration
	I0819 10:27:09.859821    4789 start.go:297] selected driver: hyperkit
	I0819 10:27:09.859849    4789 start.go:901] validating driver "hyperkit" against <nil>
	I0819 10:27:09.859893    4789 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:27:09.864287    4789 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.864395    4789 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:27:09.872759    4789 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:27:09.876743    4789 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.876768    4789 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:27:09.876803    4789 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 10:27:09.877011    4789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:27:09.877072    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:09.877082    4789 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 10:27:09.877094    4789 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 10:27:09.877164    4789 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:09.877251    4789 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:27:09.919755    4789 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:27:09.940604    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:09.940675    4789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:27:09.940720    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:09.940918    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:09.940931    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:09.941271    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:09.941299    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json: {Name:mkf9dcbb24d8b9fbe62d81f81a7a87fec457d2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:09.941835    4789 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:09.941963    4789 start.go:364] duration metric: took 95.166µs to acquireMachinesLock for "ha-431000"
	I0819 10:27:09.941997    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:09.942082    4789 start.go:125] createHost starting for "" (driver="hyperkit")
	I0819 10:27:09.963791    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:09.964075    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:09.964148    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:09.974068    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51111
	I0819 10:27:09.974512    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:09.974919    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:09.974932    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:09.975172    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:09.975283    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:09.975374    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:09.975471    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:09.975492    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:09.975527    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:09.975578    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975594    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975657    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:09.975695    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:09.975707    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:09.975719    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:09.975729    4789 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0819 10:27:09.975800    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.975970    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:09.976388    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:09.976397    4789 main.go:141] libmachine: (ha-431000) Calling .Create
	I0819 10:27:09.976462    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:09.976580    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:09.976459    4799 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:09.976633    4789 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:10.160305    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.160220    4799 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa...
	I0819 10:27:10.258779    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.258678    4799 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0819 10:27:10.258792    4789 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0819 10:27:10.258800    4789 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0819 10:27:10.259681    4789 main.go:141] libmachine: (ha-431000) DBG | I0819 10:27:10.259588    4799 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000 ...
	I0819 10:27:10.634434    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.634476    4789 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:27:10.634529    4789 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:27:10.744945    4789 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:27:10.744966    4789 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:10.744993    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745030    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:10.745065    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:10.745094    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:10.745118    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:10.748020    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 DEBUG: hyperkit: Pid is 4802
	I0819 10:27:10.748404    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:27:10.748413    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:10.748494    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:10.749357    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:10.749398    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:10.749412    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:10.749423    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:10.749431    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:10.755634    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:10.806699    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:10.807300    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:10.807314    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:10.807322    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:10.807335    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.184562    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:11.184575    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:11.299194    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:11.299213    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:11.299228    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:11.299236    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:11.300075    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:11.300086    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:12.750038    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0819 10:27:12.750054    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:12.750189    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:12.750969    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:12.751019    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:12.751030    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:12.751039    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:12.751052    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:14.752158    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0819 10:27:14.752174    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:14.752264    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:14.753040    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:14.753090    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:14.753102    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:14.753111    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:14.753117    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.754325    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0819 10:27:16.754340    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:16.754402    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:16.755326    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:16.755347    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:16.755354    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:16.755373    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:16.755390    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:16.856153    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:27:16.856252    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:27:16.856262    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:27:16.880804    4789 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:27:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:27:18.757489    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0819 10:27:18.757504    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:18.757601    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:18.758394    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:18.758435    4789 main.go:141] libmachine: (ha-431000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0819 10:27:18.758449    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:18.758481    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:18.758495    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:20.758927    4789 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0819 10:27:20.758946    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.759035    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.759848    4789 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:27:20.759873    4789 main.go:141] libmachine: (ha-431000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:20.759888    4789 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:20.759901    4789 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:27:20.759913    4789 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
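
The retry loop above polls /var/db/dhcpd_leases until the MAC generated for the VM (b2:ad:7c:2f:19:d9) shows up, then takes the matching IP. Below is a minimal Go sketch of that lookup. The log only shows minikube's parsed view of each lease, so the raw stanza layout assumed here (the usual macOS bootpd format, with ip_address listed before hw_address) is an assumption.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans /var/db/dhcpd_leases for the stanza whose hw_address
// matches the VM's MAC and returns its ip_address. Assumed stanza layout:
//
//	{
//		name=minikube
//		ip_address=192.169.0.5
//		hw_address=1,b2:ad:7c:2f:19:d9
//		...
//	}
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	ip := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // new stanza; forget any previous ip_address
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "<type>,<mac>", e.g. "1,b2:ad:7c:2f:19:d9"
			if _, hw, ok := strings.Cut(strings.TrimPrefix(line, "hw_address="), ","); ok && hw == mac {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease for %s (scan err: %v)", mac, sc.Err())
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "b2:ad:7c:2f:19:d9")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // prints 192.169.0.5 for attempt 5 above
}
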
	I0819 10:27:20.759952    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:20.760523    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760634    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:20.760741    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:27:20.760753    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:20.760839    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:20.760885    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:20.761678    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:27:20.761690    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:27:20.761696    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:27:20.761702    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:20.761795    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:20.761883    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.761969    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:20.762060    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:20.762168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:20.762361    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:20.762369    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:27:21.818394    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
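
The WaitForSSH step above amounts to dialing 192.169.0.5:22 with the generated key and retrying a no-op command until it succeeds. A rough Go sketch using golang.org/x/crypto/ssh follows; it is illustrative rather than libmachine's actual implementation, and the poll interval, timeout, host-key handling, and key path are all assumptions.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH retries the same no-op the log shows ("exit 0") until the
// guest's sshd answers, or the deadline passes.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VM, no known_hosts yet
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, err := client.NewSession()
			if err == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil // SSH is available
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(time.Second) // assumed poll interval
	}
	return fmt.Errorf("ssh not available on %s after %s", addr, timeout)
}

func main() {
	// User "docker" matches the sshutil lines below; the key path is illustrative.
	err := waitForSSH("192.169.0.5:22", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/ha-431000/id_rsa"), 2*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
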
	I0819 10:27:21.818406    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:27:21.818419    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.818554    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.818654    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818747    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.818841    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.818981    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.819131    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.819139    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:27:21.870784    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:27:21.870826    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:27:21.870831    4789 main.go:141] libmachine: Provisioning with buildroot...
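
Provisioner detection keys off the ID field of the `cat /etc/os-release` output shown above. A small self-contained Go sketch of that parse (quoting rules are simplified here):

package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns `cat /etc/os-release` output into key/value pairs;
// checking ID is enough to recognise the buildroot guest above.
func parseOSRelease(out string) map[string]string {
	info := map[string]string{}
	for _, line := range strings.Split(out, "\n") {
		k, v, ok := strings.Cut(strings.TrimSpace(line), "=")
		if !ok || k == "" {
			continue
		}
		info[k] = strings.Trim(v, `"`) // PRETTY_NAME is quoted, ID is not
	}
	return info
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
}
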
	I0819 10:27:21.870837    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.870976    4789 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:27:21.870986    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.871077    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.871169    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.871272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871352    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.871452    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.871577    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.871711    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.871719    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:27:21.937676    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:27:21.937694    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.937826    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:21.937927    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938017    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:21.938112    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:21.938245    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:21.938391    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:21.938402    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:27:21.996654    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:27:21.996676    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:27:21.996692    4789 buildroot.go:174] setting up certificates
	I0819 10:27:21.996701    4789 provision.go:84] configureAuth start
	I0819 10:27:21.996714    4789 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:27:21.996873    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:21.996990    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:21.997094    4789 provision.go:143] copyHostCerts
	I0819 10:27:21.997133    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997201    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:27:21.997209    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:27:21.997337    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:27:21.997534    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997567    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:27:21.997572    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:27:21.997714    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:27:21.997882    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.997926    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:27:21.997941    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:27:21.998049    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:27:21.998203    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
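
The server certificate above is issued with org jenkins.ha-431000 and SANs [127.0.0.1 192.169.0.5 ha-431000 localhost minikube]. The following is a hedged Go sketch of issuing such a cert with crypto/x509, signed by an existing CA; the CA file paths, the PKCS#1 key encoding, the RSA key size, and the key-usage choices are assumptions, not minikube's exact ones. Only the Subject org, the SAN set, and the 26280h expiry are taken from the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load an existing CA pair (paths and key encoding are assumptions).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("bad PEM input")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000"}}, // org= in the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube] in the log
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		DNSNames:    []string{"ha-431000", "localhost", "minikube"},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
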
	I0819 10:27:22.044837    4789 provision.go:177] copyRemoteCerts
	I0819 10:27:22.044896    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:27:22.044908    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.045021    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.045107    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.045191    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.045288    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:22.078701    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:27:22.078779    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:27:22.098027    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:27:22.098092    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:27:22.117169    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:27:22.117235    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:27:22.137411    4789 provision.go:87] duration metric: took 140.68689ms to configureAuth
	I0819 10:27:22.137424    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:27:22.137558    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:22.137574    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:22.137700    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.137783    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.137859    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.137942    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.138028    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.138134    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.138266    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.138274    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:27:22.191384    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:27:22.191397    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:27:22.191469    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:27:22.191481    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.191636    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.191724    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191834    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.191924    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.192051    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.192193    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.192236    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:27:22.256138    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:27:22.256165    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:22.256301    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:22.256391    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256475    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:22.256578    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:22.256695    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:22.256839    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:22.256851    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:27:23.816844    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
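The `sudo diff -u ... || { mv ...; systemctl ... }` one-liner above installs the rendered unit and bounces docker only when the file on disk actually differs (here diff fails because the unit does not exist yet, so the new file is installed). A plain-Go sketch of the same idempotent-update pattern; this is not minikube's code, and it assumes it runs as root on the guest.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit mirrors the shell pattern: compare, and only on a difference
// install the new unit and daemon-reload/enable/restart docker.
func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil { // the `mv` step
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit, err := os.ReadFile("docker.service.rendered") // hypothetical input file
	if err == nil {
		err = updateUnit("/lib/systemd/system/docker.service", unit)
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
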
	I0819 10:27:23.816860    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:27:23.816871    4789 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0819 10:27:23.817008    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:27:23.817016    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:27:23.817020    4789 client.go:171] duration metric: took 13.841219093s to LocalClient.Create
	I0819 10:27:23.817036    4789 start.go:167] duration metric: took 13.84126124s to libmachine.API.Create "ha-431000"
	I0819 10:27:23.817044    4789 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:27:23.817051    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:27:23.817063    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.817219    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:27:23.817232    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.817321    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.817402    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.817497    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.817595    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.852993    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:27:23.857771    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:27:23.857792    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:27:23.857909    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:27:23.858094    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:27:23.858100    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:27:23.858323    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:27:23.868639    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:23.894485    4789 start.go:296] duration metric: took 77.430316ms for postStartSetup
	I0819 10:27:23.894509    4789 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:27:23.895099    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.895256    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:23.895585    4789 start.go:128] duration metric: took 13.953185373s to createHost
	I0819 10:27:23.895598    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.895691    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.895790    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895879    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.895966    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.896069    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:27:23.896228    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:27:23.896236    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:27:23.956133    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088443.744394113
	
	I0819 10:27:23.956145    4789 fix.go:216] guest clock: 1724088443.744394113
	I0819 10:27:23.956151    4789 fix.go:229] Guest: 2024-08-19 10:27:23.744394113 -0700 PDT Remote: 2024-08-19 10:27:23.895593 -0700 PDT m=+14.491162031 (delta=-151.198887ms)
	I0819 10:27:23.956169    4789 fix.go:200] guest clock delta is within tolerance: -151.198887ms
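
The guest-clock check above runs `date +%s.%N` on the guest, parses the seconds.nanoseconds output, and compares it to the host-side timestamp. The numbers from the log can be reproduced directly, as in the Go sketch below; the 2-second tolerance used here is an assumed value, since the log only states the delta is within tolerance.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output such as "1724088443.744394113".
// GNU date prints %N as exactly nine digits, which this sketch relies on.
func parseGuestClock(out string) (time.Time, error) {
	s, ns, ok := strings.Cut(strings.TrimSpace(out), ".")
	if !ok {
		ns = "0"
	}
	sec, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec, err := strconv.ParseInt(ns, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724088443.744394113") // guest output above
	if err != nil {
		panic(err)
	}
	// "Remote" timestamp from the log line above (PDT is UTC-7).
	remote := time.Date(2024, 8, 19, 10, 27, 23, 895593000, time.FixedZone("PDT", -7*60*60))
	delta := guest.Sub(remote)
	// Prints roughly delta=-151.198887ms within=true (tolerance is assumed).
	fmt.Printf("delta=%v within=%v\n", delta, delta.Abs() <= 2*time.Second)
}
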
	I0819 10:27:23.956173    4789 start.go:83] releasing machines lock for "ha-431000", held for 14.013893151s
	I0819 10:27:23.956192    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956322    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:23.956416    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956749    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956860    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:23.956951    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:27:23.956980    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957023    4789 ssh_runner.go:195] Run: cat /version.json
	I0819 10:27:23.957036    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:23.957073    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957109    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:23.957170    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957184    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:23.957272    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957292    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:23.957350    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:23.957384    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:24.032926    4789 ssh_runner.go:195] Run: systemctl --version
	I0819 10:27:24.037723    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:27:24.041939    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:27:24.041985    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:27:24.055424    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:27:24.055435    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.055529    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.070257    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:27:24.079169    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:27:24.088264    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.088319    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:27:24.097172    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.105902    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:27:24.114585    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:27:24.123406    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:27:24.132626    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:27:24.141378    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:27:24.150490    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:27:24.158980    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:27:24.167068    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:27:24.175030    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.269460    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:27:24.289328    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:27:24.289405    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:27:24.304907    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.317291    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:27:24.330289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:27:24.340851    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.351456    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:27:24.376914    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:27:24.387402    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:27:24.402522    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:27:24.405426    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:27:24.412799    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:27:24.426019    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:27:24.528550    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:27:24.636829    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:27:24.636893    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:27:24.652027    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:24.753641    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:27.037286    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.283575266s)
	I0819 10:27:27.037346    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:27:27.047775    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:27:27.062961    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.074027    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:27:27.172330    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:27:27.284593    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.395779    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:27:27.409552    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:27:27.420868    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:27.532356    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:27:27.591558    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:27:27.591636    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:27:27.595967    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:27:27.596013    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:27:27.599275    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:27:27.625101    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:27:27.625173    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.642636    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:27:27.693299    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:27:27.693355    4789 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:27:27.693783    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:27:27.698129    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:27.708916    4789 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:27:27.708982    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:27.709038    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:27.721971    4789 docker.go:685] Got preloaded images: 
	I0819 10:27:27.721984    4789 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0819 10:27:27.722034    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:27.730353    4789 ssh_runner.go:195] Run: which lz4
	I0819 10:27:27.733218    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 10:27:27.733323    4789 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 10:27:27.736425    4789 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 10:27:27.736445    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0819 10:27:28.750864    4789 docker.go:649] duration metric: took 1.017557348s to copy over tarball
	I0819 10:27:28.750956    4789 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 10:27:31.074672    4789 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.323648699s)
	I0819 10:27:31.074688    4789 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 10:27:31.100633    4789 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 10:27:31.109680    4789 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0819 10:27:31.123335    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:31.234501    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:27:33.578614    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.344043512s)
	I0819 10:27:33.578701    4789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:27:33.592021    4789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 10:27:33.592040    4789 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:27:33.592048    4789 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:27:33.592132    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
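
The kubelet unit and its minikube drop-in written here can be inspected on the node after the fact; a small sketch using standard systemd commands (the drop-in path matches the scp destination logged a few lines below):

    # Show the kubelet unit together with minikube's 10-kubeadm.conf drop-in.
    systemctl cat kubelet
    # Or read the drop-in directly:
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
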
	I0819 10:27:33.592198    4789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:27:33.629283    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:33.629295    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:33.629309    4789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:27:33.629329    4789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:27:33.629424    4789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
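
Before init runs, the generated config above can be sanity-checked on the node; a sketch assuming kubeadm v1.31 (which ships a `config validate` subcommand) and the file path this log writes the config to:

    # Validate the generated kubeadm config without starting anything.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
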
	
	I0819 10:27:33.629439    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:27:33.629491    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:27:33.642904    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:27:33.642969    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
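
kube-vip depends on the IPVS modules loaded via the modprobe above and, once the static pod is running, binds the VIP from the manifest (192.169.0.254 here) to eth0. Two quick checks, sketched for a shell inside the node:

    # Confirm the kernel modules kube-vip's load balancing needs are loaded.
    lsmod | grep -E 'ip_vs|nf_conntrack'
    # Confirm the control-plane VIP from the manifest is held by this node.
    ip addr show eth0 | grep 192.169.0.254
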
	I0819 10:27:33.643018    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:27:33.652008    4789 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:27:33.652070    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:27:33.660066    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:27:33.673571    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:27:33.686700    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:27:33.700085    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0819 10:27:33.713804    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:27:33.716661    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:27:33.726684    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:27:33.822205    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:27:33.836833    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:27:33.836844    4789 certs.go:194] generating shared ca certs ...
	I0819 10:27:33.836855    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.837051    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:27:33.837132    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:27:33.837142    4789 certs.go:256] generating profile certs ...
	I0819 10:27:33.837189    4789 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:27:33.837203    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0819 10:27:33.888319    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt ...
	I0819 10:27:33.888333    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt: {Name:mk2ecc34873277fbe11bf267ec0d97684e18e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888666    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key ...
	I0819 10:27:33.888675    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key: {Name:mk51abee214c838f4621902241303fe73ba93aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:33.888900    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e
	I0819 10:27:33.888915    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I0819 10:27:34.060027    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e ...
	I0819 10:27:34.060046    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e: {Name:mk108eb9cf88ab2aae15883e4a3724751adb3118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060347    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e ...
	I0819 10:27:34.060356    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e: {Name:mk8fae11cce9c9a45d3e151953d1ee9ab2cc82d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.060557    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:27:34.060759    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.1e882e9e -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:27:34.060929    4789 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:27:34.060943    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0819 10:27:34.243675    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0819 10:27:34.243690    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mkeb1eac7ee8b3901067565b7ff883710f2d1088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:34.244061    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key ...
	I0819 10:27:34.244069    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key: {Name:mkc1afcd7a6a9a572716155e33c32e7def81650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
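
Each profile cert generated above can be inspected with openssl; a sketch using the apiserver cert, whose SANs should match the IP list logged at its generation (the `-ext` flag assumes OpenSSL 1.1.1+):

    # Print subject and SANs of the generated apiserver certificate.
    openssl x509 -noout -subject -ext subjectAltName \
      -in /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
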
	I0819 10:27:34.244312    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:27:34.244340    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:27:34.244378    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:27:34.244398    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:27:34.244416    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:27:34.244448    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:27:34.244486    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:27:34.244521    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:27:34.244615    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:27:34.244666    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:27:34.244675    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:27:34.244748    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:27:34.244776    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:27:34.244831    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:27:34.244909    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:27:34.244942    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.244990    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.245007    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.245522    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:27:34.267677    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:27:34.287348    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:27:34.309971    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:27:34.330910    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 10:27:34.350036    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:27:34.370663    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:27:34.390457    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:27:34.410226    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:27:34.431025    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:27:34.451232    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:27:34.471133    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:27:34.487758    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:27:34.493769    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:27:34.506308    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511941    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.511996    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:27:34.519851    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:27:34.531120    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:27:34.540803    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544302    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.544341    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:27:34.548724    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:27:34.558817    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:27:34.568088    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571692    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.571731    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:27:34.575999    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
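
The test -L / ln -fs pairs above implement OpenSSL's hashed-directory layout: each CA gets a symlink named <subject-hash>.0, where the hash comes from `openssl x509 -hash` (b5213941 for minikubeCA in this run). Reproducing one link by hand:

    # Derive the subject hash and create the c_rehash-style symlink.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
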
	I0819 10:27:34.585057    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:27:34.588207    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:27:34.588251    4789 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:27:34.588345    4789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:27:34.601241    4789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:27:34.609838    4789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 10:27:34.618794    4789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 10:27:34.627200    4789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 10:27:34.627208    4789 kubeadm.go:157] found existing configuration files:
	
	I0819 10:27:34.627243    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 10:27:34.635162    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 10:27:34.635198    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 10:27:34.643336    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 10:27:34.651247    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 10:27:34.651280    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 10:27:34.659346    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.667240    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 10:27:34.667281    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 10:27:34.675386    4789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 10:27:34.684053    4789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 10:27:34.684105    4789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 10:27:34.692357    4789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 10:27:34.751991    4789 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 10:27:34.752160    4789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 10:27:34.833970    4789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 10:27:34.834062    4789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 10:27:34.834153    4789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 10:27:34.842513    4789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 10:27:34.863067    4789 out.go:235]   - Generating certificates and keys ...
	I0819 10:27:34.863126    4789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 10:27:34.863179    4789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 10:27:35.003012    4789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 10:27:35.766829    4789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 10:27:35.976153    4789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 10:27:36.134850    4789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 10:27:36.228947    4789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 10:27:36.229166    4789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.375842    4789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 10:27:36.375934    4789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0819 10:27:36.597289    4789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 10:27:36.907219    4789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 10:27:37.426404    4789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 10:27:37.426585    4789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 10:27:37.566387    4789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 10:27:38.000620    4789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 10:27:38.121335    4789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 10:27:38.179042    4789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 10:27:38.231270    4789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 10:27:38.231752    4789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 10:27:38.233818    4789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 10:27:38.255454    4789 out.go:235]   - Booting up control plane ...
	I0819 10:27:38.255535    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 10:27:38.255605    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 10:27:38.255655    4789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 10:27:38.255734    4789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 10:27:38.255809    4789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 10:27:38.255842    4789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 10:27:38.364951    4789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 10:27:38.365069    4789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 10:27:39.366309    4789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001984632s
	I0819 10:27:39.366388    4789 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 10:27:45.029099    4789 kubeadm.go:310] [api-check] The API server is healthy after 5.666724975s
	I0819 10:27:45.039440    4789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 10:27:45.046481    4789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 10:27:45.059797    4789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 10:27:45.059959    4789 kubeadm.go:310] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 10:27:45.067482    4789 kubeadm.go:310] [bootstrap-token] Using token: rrr6yu.ivgebthw63l7ehzv
	I0819 10:27:45.106820    4789 out.go:235]   - Configuring RBAC rules ...
	I0819 10:27:45.107004    4789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 10:27:45.110638    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 10:27:45.151902    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 10:27:45.154406    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 10:27:45.156223    4789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 10:27:45.158190    4789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 10:27:45.434935    4789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 10:27:45.846068    4789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 10:27:46.434136    4789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 10:27:46.434675    4789 kubeadm.go:310] 
	I0819 10:27:46.434724    4789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 10:27:46.434728    4789 kubeadm.go:310] 
	I0819 10:27:46.434798    4789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 10:27:46.434808    4789 kubeadm.go:310] 
	I0819 10:27:46.434829    4789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 10:27:46.434881    4789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 10:27:46.434925    4789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 10:27:46.434930    4789 kubeadm.go:310] 
	I0819 10:27:46.434974    4789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 10:27:46.434984    4789 kubeadm.go:310] 
	I0819 10:27:46.435035    4789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 10:27:46.435041    4789 kubeadm.go:310] 
	I0819 10:27:46.435080    4789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 10:27:46.435139    4789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 10:27:46.435197    4789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 10:27:46.435204    4789 kubeadm.go:310] 
	I0819 10:27:46.435268    4789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 10:27:46.435333    4789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 10:27:46.435337    4789 kubeadm.go:310] 
	I0819 10:27:46.435410    4789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435498    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd \
	I0819 10:27:46.435515    4789 kubeadm.go:310] 	--control-plane 
	I0819 10:27:46.435520    4789 kubeadm.go:310] 
	I0819 10:27:46.435589    4789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 10:27:46.435594    4789 kubeadm.go:310] 
	I0819 10:27:46.435664    4789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rrr6yu.ivgebthw63l7ehzv \
	I0819 10:27:46.435746    4789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd 
	I0819 10:27:46.435997    4789 kubeadm.go:310] W0819 17:27:34.545490    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436229    4789 kubeadm.go:310] W0819 17:27:34.546600    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 10:27:46.436316    4789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
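
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane with the recipe from the Kubernetes docs, sketched here against this cluster's certificatesDir (/var/lib/minikube/certs, per the kubeadm config above) and assuming an RSA CA key:

    # Recompute the CA public-key hash used by kubeadm join.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
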
	I0819 10:27:46.436331    4789 cni.go:84] Creating CNI manager for ""
	I0819 10:27:46.436337    4789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 10:27:46.458203    4789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 10:27:46.517773    4789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 10:27:46.523858    4789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 10:27:46.523872    4789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 10:27:46.539513    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
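
After the CNI manifest is applied, kindnet should come up as a DaemonSet pod on the node; a sketch of the check (the `app=kindnet` label is an assumption based on minikube's kindnet manifest):

    # Verify the kindnet CNI pod applied above is running.
    kubectl -n kube-system get pods -l app=kindnet
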
	I0819 10:27:46.759807    4789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 10:27:46.759878    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:46.759883    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0819 10:27:46.777623    4789 ops.go:34] apiserver oom_adj: -16
	I0819 10:27:46.926523    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.427175    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:47.927281    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.428033    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:48.926686    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.426608    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:49.926666    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 10:27:50.010199    4789 kubeadm.go:1113] duration metric: took 3.25030545s to wait for elevateKubeSystemPrivileges
	I0819 10:27:50.010216    4789 kubeadm.go:394] duration metric: took 15.42163041s to StartCluster
	I0819 10:27:50.010227    4789 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.010325    4789 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.010762    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:27:50.011021    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 10:27:50.011033    4789 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.011050    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:27:50.011076    4789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:27:50.011116    4789 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0819 10:27:50.011120    4789 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0819 10:27:50.011148    4789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0819 10:27:50.011152    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.011155    4789 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0819 10:27:50.011186    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.011415    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011420    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.011430    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.011431    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.020667    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51134
	I0819 10:27:50.021171    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021230    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51136
	I0819 10:27:50.021523    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021533    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.021634    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.021753    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.021940    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.021953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.022115    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.022146    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.022229    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.022806    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.022988    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.023051    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.024924    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:27:50.025156    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:27:50.025529    4789 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:27:50.025699    4789 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0819 10:27:50.025720    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:27:50.025937    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.025963    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.031229    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51138
	I0819 10:27:50.031604    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.031942    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.031953    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.032154    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.032270    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.032358    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.032435    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.033436    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.034958    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51140
	I0819 10:27:50.035269    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.035586    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.035596    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.035796    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.036148    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.036165    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.044937    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51142
	I0819 10:27:50.045312    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.045667    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.045680    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.045893    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.045996    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:27:50.046077    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.046151    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:27:50.047102    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:27:50.047225    4789 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.047234    4789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 10:27:50.047243    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.047325    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.047417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.047495    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.047571    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.056055    4789 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 10:27:50.076134    4789 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.076146    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 10:27:50.076163    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:27:50.076310    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:27:50.076417    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:27:50.076556    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:27:50.076664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:27:50.113554    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
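
The sed pipeline above splices a `hosts` block into CoreDNS's Corefile so cluster pods can resolve host.minikube.internal to the host's gateway IP. A sketch of the fragment it inserts and how to inspect the result:

    # The fragment inserted before the "forward" directive looks like:
    #   hosts {
    #      192.169.0.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl -n kube-system get configmap coredns -o yaml   # inspect the result
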
	I0819 10:27:50.127003    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 10:27:50.262022    4789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 10:27:50.488277    4789 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0819 10:27:50.488318    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488327    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488534    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488547    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488556    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.488563    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.488564    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488681    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.488704    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.488718    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.488767    4789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:27:50.488780    4789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:27:50.488862    4789 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 10:27:50.488867    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.488877    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.488882    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495057    4789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:27:50.495477    4789 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 10:27:50.495484    4789 round_trippers.go:469] Request Headers:
	I0819 10:27:50.495490    4789 round_trippers.go:473]     Content-Type: application/json
	I0819 10:27:50.495494    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:27:50.495496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:27:50.498504    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:27:50.498632    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.498641    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.498797    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.498806    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.498814    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649595    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649607    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.649833    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.649843    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.649848    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.649874    4789 main.go:141] libmachine: Making call to close driver server
	I0819 10:27:50.649893    4789 main.go:141] libmachine: (ha-431000) Calling .Close
	I0819 10:27:50.650019    4789 main.go:141] libmachine: Successfully made call to close driver server
	I0819 10:27:50.650028    4789 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 10:27:50.650044    4789 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0819 10:27:50.673040    4789 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 10:27:50.709732    4789 addons.go:510] duration metric: took 698.654107ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 10:27:50.709774    4789 start.go:246] waiting for cluster config update ...
	I0819 10:27:50.709799    4789 start.go:255] writing updated cluster config ...
	I0819 10:27:50.746763    4789 out.go:201] 
	I0819 10:27:50.768467    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:27:50.768565    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.790908    4789 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:27:50.832651    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:27:50.832673    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:27:50.832790    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:27:50.832801    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:27:50.832852    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:27:50.833261    4789 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:27:50.833314    4789 start.go:364] duration metric: took 41.162µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:27:50.833329    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:27:50.833382    4789 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0819 10:27:50.854688    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:27:50.854833    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:27:50.854870    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:27:50.864309    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51147
	I0819 10:27:50.864640    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:27:50.864951    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:27:50.864963    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:27:50.865175    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:27:50.865294    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:27:50.865374    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:27:50.865472    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:27:50.865485    4789 client.go:168] LocalClient.Create starting
	I0819 10:27:50.865515    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:27:50.865553    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865565    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865607    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:27:50.865634    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:27:50.865649    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:27:50.865666    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:27:50.865676    4789 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0819 10:27:50.865754    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.865776    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:27:50.891966    4789 main.go:141] libmachine: Creating machine...
	I0819 10:27:50.891987    4789 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0819 10:27:50.892145    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:50.892330    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:50.892137    4845 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:27:50.892421    4789 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:27:51.078705    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.078630    4845 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa...
	I0819 10:27:51.171843    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.171751    4845 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0819 10:27:51.171860    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0819 10:27:51.171868    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0819 10:27:51.172685    4789 main.go:141] libmachine: (ha-431000-m02) DBG | I0819 10:27:51.172591    4845 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02 ...
	I0819 10:27:51.544884    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.544910    4789 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:27:51.544922    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:27:51.571631    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:27:51.571653    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:27:51.571680    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571706    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:27:51.571739    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:27:51.571767    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:27:51.571780    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:27:51.574668    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 DEBUG: hyperkit: Pid is 4850
	I0819 10:27:51.575734    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:27:51.575757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:51.575783    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:51.576702    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:51.576759    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:51.576778    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:51.576816    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:51.576830    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:51.576844    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:51.582262    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:27:51.590515    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:27:51.591362    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:51.591388    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:51.591397    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:51.591407    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:51.978930    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:27:51.978947    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:27:52.094059    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:27:52.094091    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:27:52.094127    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:27:52.094142    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:27:52.094869    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:27:52.094879    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:27:53.577521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0819 10:27:53.577541    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:53.577636    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:53.578446    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:53.578461    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:53.578472    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:53.578481    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:53.578489    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:53.578507    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:55.579485    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0819 10:27:55.579501    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:55.579576    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:55.580358    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:55.580387    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:55.580414    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:55.580426    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:55.580434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:55.580442    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.581588    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0819 10:27:57.581603    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:57.581681    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:57.582486    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:57.582510    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:57.582521    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:57.582530    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:57.582540    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:57.582548    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:27:57.680321    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:27:57.680434    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:27:57.680445    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:27:57.704982    4789 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:27:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:27:59.583757    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0819 10:27:59.583772    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:27:59.583842    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:27:59.584652    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:27:59.584696    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0819 10:27:59.584710    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:27:59.584720    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:27:59.584729    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:27:59.584737    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:28:01.585137    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0819 10:28:01.585154    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.585235    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.585996    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:28:01.586042    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:28:01.586055    4789 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:28:01.586080    4789 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:28:01.586086    4789 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
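The "Attempt 0" through "Attempt 5" loop above is the hyperkit driver polling /var/db/dhcpd_leases until an entry with the freshly generated MAC appears, then taking its IP. A minimal Go sketch of that lookup follows; findIPByMAC is a hypothetical helper, and the ip_address=/hw_address= field names are assumptions inferred from the lease entries echoed in the log, not minikube's actual parser:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
		"time"
	)

	// findIPByMAC scans the vmnet DHCP lease file for an entry whose
	// hw_address matches mac and returns the ip_address seen just before it.
	// Assumed entry layout (inferred from the log): ip_address=192.169.0.6
	// followed by hw_address=1,5a:74:68:47:b9:72 within the same block.
	func findIPByMAC(leaseFile, mac string) (string, bool) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", false
		}
		defer f.Close()

		ipRe := regexp.MustCompile(`ip_address=(.+)`)
		hwRe := regexp.MustCompile(`hw_address=1,(.+)`)

		var lastIP string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if m := ipRe.FindStringSubmatch(sc.Text()); m != nil {
				lastIP = m[1]
			}
			if m := hwRe.FindStringSubmatch(sc.Text()); m != nil && m[1] == mac {
				return lastIP, true
			}
		}
		return "", false
	}

	func main() {
		mac := "5a:74:68:47:b9:72" // the MAC generated for ha-431000-m02 above
		for attempt := 0; attempt < 60; attempt++ {
			if ip, ok := findIPByMAC("/var/db/dhcpd_leases", mac); ok {
				fmt.Println("IP:", ip)
				return
			}
			time.Sleep(2 * time.Second) // the log shows a ~2s poll interval
		}
		fmt.Println("no lease found")
	}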
	I0819 10:28:01.586098    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:01.586694    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586794    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:01.586889    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:28:01.586896    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:28:01.586980    4789 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:01.587029    4789 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 4850
	I0819 10:28:01.587790    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:28:01.587796    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:28:01.587800    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:28:01.587804    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:01.587881    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:01.587956    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588060    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:01.588138    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:01.588256    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:01.588435    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:01.588443    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:28:02.645180    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:28:02.645193    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:28:02.645198    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.645326    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.645422    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645501    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.645583    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.645718    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.645869    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.645877    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:28:02.700961    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:28:02.700992    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:28:02.700998    4789 main.go:141] libmachine: Provisioning with buildroot...
	I0819 10:28:02.701003    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701132    4789 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:28:02.701143    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.701237    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.701327    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.701424    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701502    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.701588    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.701720    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.701855    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.701864    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:28:02.773500    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:28:02.773515    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.773649    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.773737    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773840    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.773945    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.774071    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.774226    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.774237    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:28:02.838956    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:28:02.838971    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:28:02.838984    4789 buildroot.go:174] setting up certificates
	I0819 10:28:02.838992    4789 provision.go:84] configureAuth start
	I0819 10:28:02.838998    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:28:02.839135    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:02.839223    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.839322    4789 provision.go:143] copyHostCerts
	I0819 10:28:02.839347    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839393    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:28:02.839399    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:28:02.839532    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:28:02.839738    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839769    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:28:02.839774    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:28:02.839845    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:28:02.839992    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840021    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:28:02.840025    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:28:02.840090    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:28:02.840244    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
	I0819 10:28:02.878856    4789 provision.go:177] copyRemoteCerts
	I0819 10:28:02.878899    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:28:02.878912    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.879041    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.879132    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.879231    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.879330    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:02.914748    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:28:02.914819    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:28:02.934608    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:28:02.934673    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:28:02.954833    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:28:02.954900    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:28:02.974652    4789 provision.go:87] duration metric: took 135.649275ms to configureAuth
	I0819 10:28:02.974666    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:28:02.974809    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:02.974823    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:02.974958    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:02.975063    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:02.975147    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975219    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:02.975328    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:02.975454    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:02.975601    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:02.975609    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:28:03.033628    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:28:03.033639    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:28:03.033715    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:28:03.033730    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.033861    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.033950    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034053    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.034140    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.034264    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.034412    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.034459    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:28:03.102644    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:28:03.102663    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:03.102811    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:03.102898    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.102999    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:03.103120    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:03.103244    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:03.103390    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:03.103404    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:28:04.637367    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:28:04.637381    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:28:04.637388    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0819 10:28:04.637524    4789 main.go:141] libmachine: Docker is up and running!
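Note the idiom in the restart sequence that just completed: the unit is written to docker.service.new, diffed against the installed copy, and only moved into place (followed by daemon-reload, enable, and restart) when the two differ, so re-provisioning an unchanged machine never bounces the daemon. A rough Go equivalent of that compare-before-install step; writeIfChanged is an illustrative helper, not minikube code:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged installs newContents at path only when it differs from
	// what is already there, and reports whether the caller still needs to
	// run the daemon-reload/restart half of the shell one-liner above.
	func writeIfChanged(path string, newContents []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, newContents) {
			return false, nil // already up to date; skip the restart
		}
		tmp := path + ".new"
		if err := os.WriteFile(tmp, newContents, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(tmp, path) // swap the new unit into place
	}

	func main() {
		changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		if err != nil {
			panic(err)
		}
		fmt.Println("restart needed:", changed)
	}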
	I0819 10:28:04.637530    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:28:04.637534    4789 client.go:171] duration metric: took 13.771742286s to LocalClient.Create
	I0819 10:28:04.637544    4789 start.go:167] duration metric: took 13.771771513s to libmachine.API.Create "ha-431000"
	I0819 10:28:04.637550    4789 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:28:04.637557    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:28:04.637566    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.637712    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:28:04.637723    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.637834    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.637926    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.638026    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.638127    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.678475    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:28:04.682965    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:28:04.682980    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:28:04.683079    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:28:04.683246    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:28:04.683253    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:28:04.683434    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:28:04.695086    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:04.723279    4789 start.go:296] duration metric: took 85.720185ms for postStartSetup
	I0819 10:28:04.723311    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:28:04.723943    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.724123    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:28:04.724446    4789 start.go:128] duration metric: took 13.890752069s to createHost
	I0819 10:28:04.724460    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.724558    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.724679    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724786    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.724871    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.724979    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:28:04.725097    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:28:04.725103    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:28:04.784682    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088484.852271103
	
	I0819 10:28:04.784694    4789 fix.go:216] guest clock: 1724088484.852271103
	I0819 10:28:04.784698    4789 fix.go:229] Guest: 2024-08-19 10:28:04.852271103 -0700 PDT Remote: 2024-08-19 10:28:04.724454 -0700 PDT m=+55.319126445 (delta=127.817103ms)
	I0819 10:28:04.784725    4789 fix.go:200] guest clock delta is within tolerance: 127.817103ms
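The fix.go lines above implement a guest-clock sanity check: run date +%s.%N over SSH, parse the result, and accept the machine when the skew against the host clock is small (127.817103ms here). A small Go sketch of that comparison; guestClockDelta and the 2s tolerance are illustrative assumptions, not minikube's actual names or threshold:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output and returns
	// how far the host clock is ahead of (positive) or behind (negative) it.
	// float64 parsing keeps roughly microsecond precision, enough here.
	func guestClockDelta(guestDateOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestDateOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return hostNow.Sub(guest), nil
	}

	func main() {
		// Values captured in the log: the guest reported 1724088484.852271103
		// while the host clock read 2024-08-19 10:28:04.724454 -0700 PDT.
		host := time.Date(2024, 8, 19, 10, 28, 4, 724454000, time.FixedZone("PDT", -7*3600))
		delta, err := guestClockDelta("1724088484.852271103", host)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		fmt.Printf("delta=%v within=%v\n", delta, delta > -tolerance && delta < tolerance)
	}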
	I0819 10:28:04.784731    4789 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.951104834s
	I0819 10:28:04.784750    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.784884    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:04.807240    4789 out.go:177] * Found network options:
	I0819 10:28:04.829600    4789 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:28:04.851548    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.851607    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852495    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852747    4789 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:28:04.852876    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:28:04.852915    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:28:04.852962    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:28:04.853080    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:28:04.853100    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:28:04.853127    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853372    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853394    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:28:04.853596    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853633    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:28:04.853742    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:28:04.853804    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:28:04.853880    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:28:04.886788    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:28:04.886847    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:28:04.931189    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:28:04.931209    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:04.931315    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:04.947443    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:28:04.955693    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:28:04.964155    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:28:04.964197    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:28:04.972493    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.980548    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:28:04.988709    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:28:04.996856    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:28:05.005271    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:28:05.013575    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:28:05.021801    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:28:05.030285    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:28:05.037842    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:28:05.045332    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.140730    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:28:05.159555    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:28:05.159625    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:28:05.177222    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.189624    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:28:05.203743    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:28:05.214606    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.224836    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:28:05.249649    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:28:05.261132    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:28:05.276191    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:28:05.279129    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:28:05.287175    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:28:05.300748    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:28:05.396444    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:28:05.505778    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:28:05.505805    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:28:05.520914    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:05.616215    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:28:07.911303    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.295016426s)
	I0819 10:28:07.911366    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:28:07.923467    4789 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:28:07.938312    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:07.949283    4789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:28:08.046922    4789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:28:08.152880    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.256594    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:28:08.271339    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:28:08.283089    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:08.384798    4789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:28:08.441813    4789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:28:08.441881    4789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:28:08.446421    4789 start.go:563] Will wait 60s for crictl version
	I0819 10:28:08.446473    4789 ssh_runner.go:195] Run: which crictl
	I0819 10:28:08.449807    4789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:28:08.479621    4789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:28:08.479690    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.496571    4789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:28:08.537488    4789 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:28:08.579078    4789 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:28:08.603340    4789 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:28:08.603786    4789 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:28:08.608372    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
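The grep/echo/cp pipeline above replaces the host.minikube.internal entry in /etc/hosts by rebuilding the file at /tmp/h.$$ and copying it back under sudo. A small sketch of the same edit in Go, assuming equivalent privileges:

```go
// hosts_update.go: sketch of the /etc/hosts rewrite above. Drop any stale
// "host.minikube.internal" line, append the current mapping, and write the
// file back. The log stages the result in /tmp/h.$$ and copies it with sudo;
// this version writes directly and would need the same privileges.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const name = "host.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as grep -v $'\thost.minikube.internal$' in the log.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.169.0.1\t"+name)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
```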
	I0819 10:28:08.618166    4789 mustload.go:65] Loading cluster: ha-431000
	I0819 10:28:08.618314    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:08.618533    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.618549    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.627122    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
	I0819 10:28:08.627459    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.627845    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.627857    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.628097    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.628239    4789 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:28:08.628342    4789 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:28:08.628430    4789 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:28:08.629353    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:08.629592    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:08.629608    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:08.638041    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51172
	I0819 10:28:08.638388    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:08.638753    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:08.638770    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:08.638992    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:08.639108    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:08.639209    4789 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:28:08.639216    4789 certs.go:194] generating shared ca certs ...
	I0819 10:28:08.639225    4789 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.639357    4789 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:28:08.639425    4789 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:28:08.639434    4789 certs.go:256] generating profile certs ...
	I0819 10:28:08.639538    4789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:28:08.639562    4789 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788
	I0819 10:28:08.639575    4789 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0819 10:28:08.693749    4789 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 ...
	I0819 10:28:08.693766    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788: {Name:mkade16cb35e521e9e55fc42d7cb129c8b94b782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694149    4789 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 ...
	I0819 10:28:08.694160    4789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788: {Name:mkeae0a28d48da45f84299952289f15db5f944f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:28:08.694378    4789 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:28:08.694703    4789 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.2ad85788 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
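The apiserver profile cert generated above carries SANs for the service IP (10.96.0.1), localhost, both node IPs, and the VIP 192.169.0.254, and is signed by the shared minikubeCA key. A minimal crypto/x509 sketch of minting such a cert follows; it creates a throwaway CA in place of .minikube/ca.key, so it illustrates the shape of the step rather than reproducing minikube's certs.go.

```go
// apiserver_cert.go: sketch of issuing an API server certificate whose SANs
// cover the IPs listed in the log, signed by a locally generated CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustKey() *rsa.PrivateKey {
	k, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	return k
}

func main() {
	// Throwaway CA; the real run reuses the existing minikubeCA key pair.
	caKey := mustKey()
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	serverKey := mustKey()
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The same IP SANs the log lists for apiserver.crt.2ad85788.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"), net.ParseIP("192.169.0.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```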
	I0819 10:28:08.694954    4789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:28:08.694964    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:28:08.694987    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:28:08.695006    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:28:08.695024    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:28:08.695042    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:28:08.695060    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:28:08.695078    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:28:08.695096    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:28:08.695175    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:28:08.695213    4789 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:28:08.695228    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:28:08.695261    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:28:08.695290    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:28:08.695321    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:28:08.695400    4789 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:28:08.695438    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:28:08.695462    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:28:08.695482    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:08.695511    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:08.695664    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:08.695745    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:08.695845    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:08.695925    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:08.729193    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:28:08.736181    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:28:08.748665    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:28:08.751826    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:28:08.773481    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:28:08.777252    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:28:08.787546    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:28:08.791015    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:28:08.800105    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:28:08.803218    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:28:08.812240    4789 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:28:08.815351    4789 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:28:08.824083    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:28:08.844052    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:28:08.864107    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:28:08.884612    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:28:08.904284    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 10:28:08.924397    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 10:28:08.944026    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:28:08.964689    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:28:08.984934    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:28:09.004413    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:28:09.024043    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:28:09.043924    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:28:09.058066    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:28:09.071585    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:28:09.085080    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:28:09.098536    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:28:09.112048    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:28:09.125242    4789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:28:09.139717    4789 ssh_runner.go:195] Run: openssl version
	I0819 10:28:09.144032    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:28:09.152602    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.155967    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.156009    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:28:09.160192    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:28:09.168568    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:28:09.176997    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180533    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.180568    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:28:09.184799    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:28:09.193356    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:28:09.201811    4789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205453    4789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.205494    4789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:28:09.209760    4789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
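The openssl x509 -hash / ln -fs pairs above exist because OpenSSL resolves trust anchors in /etc/ssl/certs by subject-hash filenames (e.g. b5213941.0 for minikubeCA.pem). A sketch of that step in Go, assuming the openssl CLI is on PATH; the paths are illustrative:

```go
// ca_symlink.go: sketch of the subject-hash symlink step above. Compute the
// OpenSSL subject hash of an installed PEM and link "<hash>.0" to it so the
// system trust store can find it.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, mirroring the `ln -fs` in the log.
	os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}
```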
	I0819 10:28:09.218392    4789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:28:09.222392    4789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:28:09.222437    4789 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:28:09.222498    4789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:28:09.222516    4789 kube-vip.go:115] generating kube-vip config ...
	I0819 10:28:09.222559    4789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:28:09.234408    4789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:28:09.234452    4789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
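The kube-vip static pod above pins the control-plane VIP 192.169.0.254 to eth0 and, with lb_enable set, load-balances API traffic on port 8443 across control-plane nodes. A sketch of rendering such a manifest from parameters with text/template follows; the struct and abbreviated template are illustrative, not minikube's actual kube-vip generator:

```go
// kubevip_template.go: sketch of producing a kube-vip static pod manifest
// from the VIP, interface, and port seen in the run above.
package main

import (
	"log"
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

type vipConfig struct {
	Interface string
	Port      int
	VIP       string
}

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the run above: VIP 192.169.0.254 on eth0, port 8443.
	cfg := vipConfig{Interface: "eth0", Port: 8443, VIP: "192.169.0.254"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		log.Fatal(err)
	}
}
```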
	I0819 10:28:09.234506    4789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.242939    4789 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:28:09.242994    4789 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 10:28:09.251331    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl
	I0819 10:28:09.251336    4789 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 10:28:11.797289    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:28:11.809069    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.809192    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:28:11.812267    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:28:11.812291    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0819 10:28:12.469259    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.469340    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:28:12.472845    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:28:12.472869    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:28:13.348737    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.348820    4789 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:28:13.352429    4789 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:28:13.352449    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
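Each binary above is downloaded with a checksum= fragment pointing at the published .sha256 file, so the bits are verified before they land in the local cache and get scp'd into the VM. A self-contained sketch of that verify-then-cache flow; the URL matches the log, the destination filename is illustrative:

```go
// verified_download.go: sketch of a checksum-verified download in the spirit
// of the download.go lines above. Fetch the binary and its published sha256,
// compare digests, then write the file to the cache.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	want := strings.Fields(string(sum))[0] // the file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		log.Fatalf("checksum mismatch: got %x, want %s", got, want)
	}
	// Cache the verified binary, as the log does before scp'ing it over.
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("verified", base)
}
```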
	I0819 10:28:13.542994    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:28:13.550937    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:28:13.564187    4789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:28:13.577654    4789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:28:13.591433    4789 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:28:13.594347    4789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:28:13.604347    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:13.710422    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:13.730131    4789 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:28:13.730407    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:28:13.730448    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:28:13.739474    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51199
	I0819 10:28:13.739816    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:28:13.740174    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:28:13.740190    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:28:13.740438    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:28:13.740564    4789 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:28:13.740661    4789 start.go:317] joinCluster: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:28:13.740750    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 10:28:13.740767    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:28:13.740857    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:28:13.740939    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:28:13.741027    4789 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:28:13.741101    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:28:13.815525    4789 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:13.815563    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0819 10:28:41.108330    4789 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lotd37.s20z2cg4jehblgbq --discovery-token-ca-cert-hash sha256:ec43ca3cf90fc65d20fe03b158fc58693d0656f86278aa97a4f9bfad2a4d06cd --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-431000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (27.292143754s)
	I0819 10:28:41.108351    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 10:28:41.504714    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000-m02 minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=false
	I0819 10:28:41.585348    4789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-431000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 10:28:41.693283    4789 start.go:319] duration metric: took 27.951997328s to joinCluster
	I0819 10:28:41.693326    4789 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:28:41.693537    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:28:41.715528    4789 out.go:177] * Verifying Kubernetes components...
	I0819 10:28:41.790354    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:28:41.995139    4789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:28:42.017369    4789 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:28:42.017608    4789 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1243a2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:28:42.017650    4789 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:28:42.017827    4789 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:28:42.017919    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.017925    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.017930    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.017935    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.025432    4789 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 10:28:42.518902    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:42.518917    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:42.518923    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:42.518927    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:42.521742    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:43.018396    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.018411    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.018417    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.018421    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.021454    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:43.518031    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:43.518083    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:43.518106    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:43.518116    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:43.522999    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:44.018193    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.018219    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.018231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.018237    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.021854    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:44.022387    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:44.518152    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:44.518189    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:44.518196    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:44.518199    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:44.520027    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.019772    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.019792    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.019799    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.019803    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.021628    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:45.518039    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:45.518053    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:45.518059    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:45.518064    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:45.520113    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:46.018198    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.018232    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.018239    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.018243    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.020136    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.518474    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:46.518490    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:46.518496    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:46.518499    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:46.520505    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:46.520916    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:47.019124    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.019150    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.019162    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.022729    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:47.518316    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:47.518341    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:47.518351    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:47.518356    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:47.520471    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:48.019594    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.019620    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.019630    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.019636    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.023447    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:48.518492    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:48.518526    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:48.518583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:48.518593    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:48.523421    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:48.523787    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:49.019217    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.019242    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.019254    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.019260    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.022862    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:49.520299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:49.520324    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:49.520337    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:49.520342    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:49.523532    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.019383    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.019412    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.019424    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.019430    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.022847    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:50.519489    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:50.519503    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:50.519511    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:50.519515    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:50.522131    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:51.019130    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.019153    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.019163    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.019168    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.022497    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:51.022894    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:51.518391    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:51.518448    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:51.518465    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:51.518476    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:51.521848    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.019014    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.019045    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.019103    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.019117    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.022339    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:52.519630    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:52.519644    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:52.519651    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:52.519655    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:52.522019    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.018435    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.018460    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.018472    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.018480    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.021850    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:53.518299    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:53.518340    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:53.518349    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:53.518355    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:53.520795    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:53.521268    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:54.020380    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.020406    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.020418    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.020423    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.024178    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:54.519346    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:54.519364    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:54.519383    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:54.519387    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:54.521155    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:55.020400    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.020425    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.020437    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.020444    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.024326    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:55.519229    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:55.519245    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:55.519264    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:55.519268    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:55.521435    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:55.521852    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:56.019678    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.019703    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.019714    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.019719    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.023317    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:56.518539    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:56.518563    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:56.518576    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:56.518581    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:56.521781    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.020424    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.020449    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.020460    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.020465    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.024114    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.519399    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:57.519428    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:57.519468    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:57.519475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:57.522788    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:57.523223    4789 node_ready.go:53] node "ha-431000-m02" has status "Ready":"False"
	I0819 10:28:58.018734    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.018759    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.018770    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.018777    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.022242    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.518348    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.518359    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.518371    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.518375    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.522907    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.523168    4789 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:28:58.523182    4789 node_ready.go:38] duration metric: took 16.504973252s for node "ha-431000-m02" to be "Ready" ...
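The raw GETs above poll /api/v1/nodes/ha-431000-m02 roughly every 500ms until the node's Ready condition flips to True (16.5s in this run), with a 6m ceiling. The same wait expressed with client-go, as a sketch; the kubeconfig path is a placeholder:

```go
// node_ready.go: sketch of the readiness poll shown above using client-go
// instead of raw round-trippers. Poll the node every 500ms for up to 6m and
// succeed once its Ready condition is True.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-431000-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("node ha-431000-m02 is Ready")
}
```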
	I0819 10:28:58.523189    4789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:28:58.523237    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:28:58.523243    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.523249    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.523253    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.528083    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:28:58.532699    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.532761    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:28:58.532768    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.532774    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.532776    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.535978    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.536344    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.536351    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.536358    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.536361    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.538061    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.538368    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.538377    4789 pod_ready.go:82] duration metric: took 5.660556ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538383    4789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.538413    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:28:58.538417    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.538423    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.538428    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.540013    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.540457    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.540465    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.540471    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.540475    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.542120    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.542393    4789 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.542400    4789 pod_ready.go:82] duration metric: took 4.011453ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542406    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.542439    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:28:58.542444    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.542449    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.542454    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.543986    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.544340    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.544347    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.544353    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.544356    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.545868    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.546173    4789 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.546181    4789 pod_ready.go:82] duration metric: took 3.769725ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546187    4789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.546221    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:28:58.546226    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.546231    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.546234    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.547638    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.548110    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:58.548118    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.548123    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.548127    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.549514    4789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:28:58.549853    4789 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.549860    4789 pod_ready.go:82] duration metric: took 3.668598ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.549868    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:58.718822    4789 request.go:632] Waited for 168.888912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718861    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:28:58.718867    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.718872    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.718876    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.721032    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:58.919673    4789 request.go:632] Waited for 198.011193ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919731    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:58.919740    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:58.919750    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:58.919807    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:58.923236    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:58.923670    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:58.923682    4789 pod_ready.go:82] duration metric: took 373.799986ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
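	[annotation] The "Waited for ... due to client-side throttling" lines above come from client-go's built-in request rate limiter (request.go), which defaults to 5 QPS with a burst of 10; when the test exceeds that rate, requests are queued rather than sent. A minimal sketch of raising those limits on rest.Config — the kubeconfig path and the values 50/100 are illustrative, not from this run:

	// Sketch only: shows where client-go's client-side throttle is configured.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // default is 5 requests/second
		cfg.Burst = 100 // default burst is 10
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}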
	I0819 10:28:58.923691    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.119399    4789 request.go:632] Waited for 195.629207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119559    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:28:59.119572    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.119583    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.119589    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.122804    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.318619    4789 request.go:632] Waited for 195.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318674    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:28:59.318695    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.318702    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.318705    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.320812    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.321165    4789 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.321173    4789 pod_ready.go:82] duration metric: took 397.466691ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.321180    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.520541    4789 request.go:632] Waited for 199.292765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520642    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:28:59.520652    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.520663    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.520672    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.524463    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:28:59.718728    4789 request.go:632] Waited for 192.615056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718803    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:28:59.718811    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.718818    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.718823    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.720955    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:28:59.721397    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:28:59.721407    4789 pod_ready.go:82] duration metric: took 400.213219ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.721415    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:28:59.918907    4789 request.go:632] Waited for 197.434904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919004    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:28:59.919014    4789 round_trippers.go:469] Request Headers:
	I0819 10:28:59.919024    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:28:59.919030    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:28:59.922451    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.119192    4789 request.go:632] Waited for 196.220574ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119263    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.119272    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.119286    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.119297    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.122630    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.122957    4789 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.122968    4789 pod_ready.go:82] duration metric: took 401.538458ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.122977    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.320524    4789 request.go:632] Waited for 197.475989ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320660    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:29:00.320672    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.320681    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.320689    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.323985    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.519403    4789 request.go:632] Waited for 194.628597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519535    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:00.519546    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.519560    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.519568    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.523121    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.523435    4789 pod_ready.go:93] pod "kube-proxy-5h7j2" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.523449    4789 pod_ready.go:82] duration metric: took 400.456993ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.523457    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.718666    4789 request.go:632] Waited for 195.15054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718742    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:29:00.718752    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.718786    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.718800    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.721920    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.918782    4789 request.go:632] Waited for 196.40919ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918873    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:00.918882    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:00.918896    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:00.918906    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:00.922355    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:00.922815    4789 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:00.922824    4789 pod_ready.go:82] duration metric: took 399.351509ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:00.922830    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.118854    4789 request.go:632] Waited for 195.977175ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118950    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:29:01.118965    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.118981    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.118987    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.122683    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.318886    4789 request.go:632] Waited for 195.887859ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319029    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:29:01.319042    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.319053    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.319063    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.322689    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.323187    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.323200    4789 pod_ready.go:82] duration metric: took 400.355182ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.323208    4789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.518928    4789 request.go:632] Waited for 195.662505ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519043    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:29:01.519057    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.519070    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.519077    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.522736    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:01.718819    4789 request.go:632] Waited for 195.65197ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718885    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:29:01.718891    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.718899    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.718905    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.721246    4789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:29:01.721682    4789 pod_ready.go:93] pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:29:01.721691    4789 pod_ready.go:82] duration metric: took 398.467113ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:29:01.721701    4789 pod_ready.go:39] duration metric: took 3.198431164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
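	[annotation] The pod_ready.go loop above polls each system pod until its Ready condition reports True, with a 6m0s budget per pod. A minimal client-go sketch of that general pattern, assuming a reachable kubeconfig; the namespace and pod name are taken from this run as examples, and this is not minikube's actual implementation:

	// Sketch of polling a pod's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *v1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodReady {
				return c.Status == v1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-6f6b679f8f-hr2qx", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod")
	}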
	I0819 10:29:01.721718    4789 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:29:01.721774    4789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:29:01.735634    4789 api_server.go:72] duration metric: took 20.041851081s to wait for apiserver process to appear ...
	I0819 10:29:01.735647    4789 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:29:01.735663    4789 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:29:01.738815    4789 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:29:01.738848    4789 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:29:01.738854    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.738860    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.738864    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.739526    4789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:29:01.739580    4789 api_server.go:141] control plane version: v1.31.0
	I0819 10:29:01.739589    4789 api_server.go:131] duration metric: took 3.937962ms to wait for apiserver health ...
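	[annotation] The health wait above issues a plain GET against the API server's /healthz endpoint and treats a 200 response with body "ok" as healthy, then reads /version for the control-plane version. A minimal sketch of that probe; the address is the one from this run, and TLS verification is skipped here purely for brevity (minikube authenticates with the cluster CA instead):

	// Sketch of the /healthz probe seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.169.0.5:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}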
	I0819 10:29:01.739594    4789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:29:01.918638    4789 request.go:632] Waited for 178.995687ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918733    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:01.918745    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:01.918757    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:01.918762    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:01.922864    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:01.926606    4789 system_pods.go:59] 17 kube-system pods found
	I0819 10:29:01.926628    4789 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:01.926633    4789 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:01.926636    4789 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:01.926640    4789 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:01.926642    4789 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:01.926645    4789 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:01.926647    4789 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:01.926650    4789 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:01.926653    4789 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:01.926656    4789 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:01.926659    4789 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:01.926666    4789 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:01.926669    4789 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:01.926672    4789 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:01.926674    4789 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:01.926677    4789 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:01.926680    4789 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:01.926684    4789 system_pods.go:74] duration metric: took 187.080965ms to wait for pod list to return data ...
	I0819 10:29:01.926689    4789 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:29:02.119406    4789 request.go:632] Waited for 192.625822ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119507    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:29:02.119517    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.119528    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.119535    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.123120    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.123283    4789 default_sa.go:45] found service account: "default"
	I0819 10:29:02.123293    4789 default_sa.go:55] duration metric: took 196.595366ms for default service account to be created ...
	I0819 10:29:02.123300    4789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:29:02.319795    4789 request.go:632] Waited for 196.43255ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319928    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:29:02.319939    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.319947    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.319954    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.324586    4789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:29:02.328058    4789 system_pods.go:86] 17 kube-system pods found
	I0819 10:29:02.328071    4789 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:29:02.328075    4789 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:29:02.328078    4789 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:29:02.328083    4789 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:29:02.328086    4789 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:29:02.328088    4789 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:29:02.328091    4789 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:29:02.328093    4789 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:29:02.328096    4789 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:29:02.328098    4789 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:29:02.328101    4789 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:29:02.328103    4789 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:29:02.328106    4789 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:29:02.328109    4789 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:29:02.328112    4789 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:29:02.328115    4789 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:29:02.328117    4789 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:29:02.328122    4789 system_pods.go:126] duration metric: took 204.813151ms to wait for k8s-apps to be running ...
	I0819 10:29:02.328133    4789 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:29:02.328183    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:29:02.340002    4789 system_svc.go:56] duration metric: took 11.865981ms WaitForService to wait for kubelet
	I0819 10:29:02.340017    4789 kubeadm.go:582] duration metric: took 20.646222268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
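	[annotation] Each ssh_runner.go line above runs a single command on the VM over SSH and inspects its exit status (systemctl is-active exits non-zero when the unit is inactive). A sketch of that pattern using golang.org/x/crypto/ssh; the key path and address are placeholders rather than values from this run, and skipping host-key checks is acceptable only for a throwaway local test VM:

	// Sketch of running one remote command over SSH and using its exit status.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "192.169.0.7:22", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test VM only
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Run returns a non-nil error for a non-zero exit status.
		if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
			fmt.Println("kubelet not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}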
	I0819 10:29:02.340034    4789 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:29:02.518831    4789 request.go:632] Waited for 178.726274ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518969    4789 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:29:02.518980    4789 round_trippers.go:469] Request Headers:
	I0819 10:29:02.518991    4789 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:29:02.518998    4789 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:29:02.522659    4789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:29:02.523326    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523339    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523348    4789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:29:02.523351    4789 node_conditions.go:123] node cpu capacity is 2
	I0819 10:29:02.523354    4789 node_conditions.go:105] duration metric: took 183.311856ms to run NodePressure ...
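	[annotation] The NodePressure verification above reads each node's reported capacity from /api/v1/nodes. A client-go sketch that prints the same two figures logged here (ephemeral storage and CPU), assuming a reachable kubeconfig:

	// Sketch of reading node capacity from the nodes list.
	package main

	import (
		"context"
		"fmt"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[v1.ResourceCPU]
			fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}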
	I0819 10:29:02.523361    4789 start.go:241] waiting for startup goroutines ...
	I0819 10:29:02.523378    4789 start.go:255] writing updated cluster config ...
	I0819 10:29:02.544110    4789 out.go:201] 
	I0819 10:29:02.566227    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:02.566358    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.588965    4789 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:29:02.630777    4789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:29:02.630803    4789 cache.go:56] Caching tarball of preloaded images
	I0819 10:29:02.630953    4789 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:29:02.630966    4789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:29:02.631053    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:02.631767    4789 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:29:02.631849    4789 start.go:364] duration metric: took 64.609µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:29:02.631869    4789 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:29:02.631978    4789 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I0819 10:29:02.652968    4789 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 10:29:02.653116    4789 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:29:02.653158    4789 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:29:02.663539    4789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51204
	I0819 10:29:02.663925    4789 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:29:02.664263    4789 main.go:141] libmachine: Using API Version  1
	I0819 10:29:02.664277    4789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:29:02.664539    4789 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:29:02.664672    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:02.664758    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:02.664867    4789 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0819 10:29:02.664899    4789 client.go:168] LocalClient.Create starting
	I0819 10:29:02.664932    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem
	I0819 10:29:02.664992    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665005    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665051    4789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem
	I0819 10:29:02.665087    4789 main.go:141] libmachine: Decoding PEM data...
	I0819 10:29:02.665103    4789 main.go:141] libmachine: Parsing certificate...
	I0819 10:29:02.665116    4789 main.go:141] libmachine: Running pre-create checks...
	I0819 10:29:02.665122    4789 main.go:141] libmachine: (ha-431000-m03) Calling .PreCreateCheck
	I0819 10:29:02.665218    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.665228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:02.674109    4789 main.go:141] libmachine: Creating machine...
	I0819 10:29:02.674126    4789 main.go:141] libmachine: (ha-431000-m03) Calling .Create
	I0819 10:29:02.674302    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:02.674550    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.674293    4918 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:29:02.674675    4789 main.go:141] libmachine: (ha-431000-m03) Downloading /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 10:29:02.956098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:02.955977    4918 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa...
	I0819 10:29:03.041212    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.041121    4918 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk...
	I0819 10:29:03.041230    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing magic tar header
	I0819 10:29:03.041239    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Writing SSH key tar header
	I0819 10:29:03.042098    4789 main.go:141] libmachine: (ha-431000-m03) DBG | I0819 10:29:03.042003    4918 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03 ...
	I0819 10:29:03.582755    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.582783    4789 main.go:141] libmachine: (ha-431000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:29:03.582846    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:29:03.618942    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:29:03.618960    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:29:03.619021    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619049    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:29:03.619085    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:29:03.619116    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:29:03.619133    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:29:03.621990    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 DEBUG: hyperkit: Pid is 4921
	I0819 10:29:03.622461    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:29:03.622497    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:03.622585    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:03.623424    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:03.623486    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:03.623500    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:03.623537    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:03.623548    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:03.623558    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:03.623568    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:03.629643    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:29:03.638725    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:29:03.639577    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:03.639599    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:03.639609    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:03.639622    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.022361    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:29:04.022375    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:29:04.137228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:29:04.137262    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:29:04.137274    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:29:04.137284    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:29:04.138001    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:29:04.138016    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:29:05.623879    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 1
	I0819 10:29:05.623896    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:05.624023    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:05.624809    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:05.624873    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:05.624888    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:05.624904    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:05.624917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:05.624926    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:05.624935    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:07.626679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 2
	I0819 10:29:07.626696    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:07.626779    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:07.627539    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:07.627582    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:07.627592    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:07.627610    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:07.627619    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:07.627626    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:07.627635    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.627812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 3
	I0819 10:29:09.627828    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:09.627917    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:09.628679    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:09.628746    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:09.628777    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:09.628791    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:09.628799    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:09.628806    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:09.628812    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:09.722721    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:29:09.722792    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:29:09.722802    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:29:09.745848    4789 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:29:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:29:11.630390    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 4
	I0819 10:29:11.630407    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:11.630495    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:11.631275    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:11.631321    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0819 10:29:11.631331    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d220}
	I0819 10:29:11.631340    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:29:11.631359    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:a6:51:e0:9e:29:6e ID:1,a6:51:e0:9e:29:6e Lease:0x66c4cbf5}
	I0819 10:29:11.631366    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:4b:33:78:a7:be ID:1,ca:4b:33:78:a7:be Lease:0x66c4cb30}
	I0819 10:29:11.631387    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:ee:78:ef:b7:7a:3c ID:1,ee:78:ef:b7:7a:3c Lease:0x66c4c9bf}
	I0819 10:29:13.633236    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 5
	I0819 10:29:13.633251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.633339    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.634147    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:29:13.634209    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I0819 10:29:13.634221    4789 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:29:13.634228    4789 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:29:13.634232    4789 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
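	[annotation] The attempt loop above polls macOS's /var/db/dhcpd_leases every two seconds for the MAC the driver generated, succeeding on attempt 5 once the VM takes a lease (192.169.0.7). A rough standalone sketch of that lookup; the field layout ("ip_address=", "hw_address=1,<mac>") is assumed from the usual format of that file and is not lifted from the hyperkit driver, which also normalizes MAC octets:

	// Approximate sketch of finding a VM's IP in /var/db/dhcpd_leases by MAC.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const mac = "f6:29:ff:43:e4:63" // MAC generated for the VM above
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=") // remember the entry's IP
			}
			// hw_address lines are assumed to look like "hw_address=1,f6:29:ff:43:e4:63"
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
				fmt.Println("found match:", ip)
				return
			}
		}
		fmt.Println("no lease for", mac)
	}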
	I0819 10:29:13.634299    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:13.634943    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635064    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:13.635157    4789 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 10:29:13.635165    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:29:13.635251    4789 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:29:13.635310    4789 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:29:13.636120    4789 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 10:29:13.636129    4789 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 10:29:13.636133    4789 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 10:29:13.636138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:13.636228    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:13.636309    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636392    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:13.636477    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:13.636587    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:13.636755    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:13.636763    4789 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 10:29:14.697546    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:29:14.697558    4789 main.go:141] libmachine: Detecting the provisioner...
	I0819 10:29:14.697564    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.697702    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.697798    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.697887    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.698009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.698168    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.698318    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.698326    4789 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 10:29:14.765778    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 10:29:14.765827    4789 main.go:141] libmachine: found compatible host: buildroot
	I0819 10:29:14.765833    4789 main.go:141] libmachine: Provisioning with buildroot...
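[editor's note] Provisioner detection boils down to reading /etc/os-release on the guest and matching the ID field ("buildroot" here). A small sketch of that parse; the helper name is hypothetical:

package main

import (
    "fmt"
    "strings"
)

// osReleaseID extracts the ID field from /etc/os-release content,
// stripping optional quotes (values may be quoted or bare).
func osReleaseID(content string) string {
    for _, line := range strings.Split(content, "\n") {
        if v, ok := strings.CutPrefix(line, "ID="); ok {
            return strings.Trim(v, `"`)
        }
    }
    return ""
}

func main() {
    out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    fmt.Println(osReleaseID(out)) // buildroot -> pick the buildroot provisioner
}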
	I0819 10:29:14.765839    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.765977    4789 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:29:14.765988    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.766081    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.766185    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.766270    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766369    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.766481    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.766635    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.766783    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.766792    4789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:29:14.841753    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:29:14.841769    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.841901    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:14.842009    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842101    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:14.842195    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:14.842324    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:14.842477    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:14.842489    4789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:29:14.911764    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:29:14.911779    4789 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:29:14.911793    4789 buildroot.go:174] setting up certificates
	I0819 10:29:14.911800    4789 provision.go:84] configureAuth start
	I0819 10:29:14.911807    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:29:14.911942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:14.912037    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:14.912110    4789 provision.go:143] copyHostCerts
	I0819 10:29:14.912141    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912187    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:29:14.912193    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:29:14.912326    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:29:14.912504    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912534    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:29:14.912539    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:29:14.912651    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:29:14.912808    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912854    4789 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:29:14.912859    4789 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:29:14.912935    4789 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:29:14.913083    4789 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
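[editor's note] The server cert generated above is a CA-signed TLS certificate whose SANs cover every name the Docker API might be dialed by (127.0.0.1, the node IP, the hostname, localhost, minikube). A condensed sketch of that issuance with crypto/x509; to stay self-contained it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Throwaway CA (the real flow loads ca.pem/ca-key.pem from .minikube/certs).
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().AddDate(10, 0, 0),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // Server certificate with the org and SANs from the log line above.
    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m03"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(10, 0, 0),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-431000-m03", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
    }
    srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}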
	I0819 10:29:15.064390    4789 provision.go:177] copyRemoteCerts
	I0819 10:29:15.064440    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:29:15.064455    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.064599    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.064695    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.064786    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.064886    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:15.103656    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:29:15.103727    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:29:15.123430    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:29:15.123497    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:29:15.143265    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:29:15.143333    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:29:15.162885    4789 provision.go:87] duration metric: took 251.064942ms to configureAuth
	I0819 10:29:15.162900    4789 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:29:15.163052    4789 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:29:15.163065    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:15.163221    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.163329    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.163417    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163506    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.163582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.163693    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.163824    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.163831    4789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:29:15.225270    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:29:15.225286    4789 buildroot.go:70] root file system type: tmpfs
	I0819 10:29:15.225356    4789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:29:15.225368    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.225510    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.225619    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225708    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.225810    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.225948    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.226090    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.226134    4789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:29:15.299640    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:29:15.299658    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:15.299797    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:15.299889    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.299978    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:15.300067    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:15.300202    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:15.300355    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:15.300368    4789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:29:16.819930    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
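[editor's note] The unit file is rendered host-side and pushed as docker.service.new; the `diff ... || { mv ...; }` one-liner above only swaps it in and restarts docker when the content actually changed. A trimmed sketch of the kind of templating involved (the template text is abbreviated and the field names are illustrative, not minikube's actual template):

package main

import (
    "os"
    "text/template"
)

const unitTmpl = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
{{range .NoProxy}}Environment="NO_PROXY={{.}}"
{{end}}ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}}

[Install]
WantedBy=multi-user.target
`

func main() {
    t := template.Must(template.New("docker.service").Parse(unitTmpl))
    // Two NO_PROXY lines, as in the log; systemd applies Environment=
    // assignments in order, so the second (full) list takes effect.
    data := struct {
        NoProxy  []string
        Provider string
    }{
        NoProxy:  []string{"192.169.0.5", "192.169.0.5,192.169.0.6"},
        Provider: "hyperkit",
    }
    t.Execute(os.Stdout, data) // error ignored in this sketch
}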
	
	I0819 10:29:16.819945    4789 main.go:141] libmachine: Checking connection to Docker...
	I0819 10:29:16.819953    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetURL
	I0819 10:29:16.820095    4789 main.go:141] libmachine: Docker is up and running!
	I0819 10:29:16.820107    4789 main.go:141] libmachine: Reticulating splines...
	I0819 10:29:16.820113    4789 client.go:171] duration metric: took 14.154897138s to LocalClient.Create
	I0819 10:29:16.820124    4789 start.go:167] duration metric: took 14.154947877s to libmachine.API.Create "ha-431000"
	I0819 10:29:16.820129    4789 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:29:16.820136    4789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:29:16.820145    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.820288    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:29:16.820301    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.820396    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.820494    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.820582    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.820664    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:16.862693    4789 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:29:16.866416    4789 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:29:16.866431    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:29:16.866540    4789 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:29:16.866725    4789 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:29:16.866732    4789 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:29:16.866944    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:29:16.874578    4789 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:29:16.904910    4789 start.go:296] duration metric: took 84.771069ms for postStartSetup
	I0819 10:29:16.904942    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:29:16.905569    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.905740    4789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:29:16.906122    4789 start.go:128] duration metric: took 14.273822612s to createHost
	I0819 10:29:16.906138    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:16.906230    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:16.906303    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906387    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:16.906475    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:16.906573    4789 main.go:141] libmachine: Using SSH client type: native
	I0819 10:29:16.906690    4789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10d80ea0] 0x10d83c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:29:16.906697    4789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:29:16.969389    4789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724088556.958185685
	
	I0819 10:29:16.969401    4789 fix.go:216] guest clock: 1724088556.958185685
	I0819 10:29:16.969406    4789 fix.go:229] Guest: 2024-08-19 10:29:16.958185685 -0700 PDT Remote: 2024-08-19 10:29:16.906131 -0700 PDT m=+127.499217490 (delta=52.054685ms)
	I0819 10:29:16.969416    4789 fix.go:200] guest clock delta is within tolerance: 52.054685ms
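[editor's note] `date +%s.%N` reports the guest clock as seconds.nanoseconds; the fix-up only acts when the guest/host delta exceeds a tolerance (the 52ms here is well inside it). A tiny sketch of that comparison; the 2-second tolerance is an assumed value for illustration:

package main

import (
    "fmt"
    "math"
    "time"
)

// clockDelta reports how far the guest clock is from the host clock
// and whether the difference is within tol.
func clockDelta(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    d := guest.Sub(host)
    return d, math.Abs(float64(d)) <= float64(tol)
}

func main() {
    // Values from the log: guest 1724088556.958185685, host ~52ms behind.
    guest := time.Unix(1724088556, 958185685)
    host := guest.Add(-52054685 * time.Nanosecond)
    d, ok := clockDelta(guest, host, 2*time.Second)
    fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta=52.054685ms within tolerance=true
}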
	I0819 10:29:16.969419    4789 start.go:83] releasing machines lock for "ha-431000-m03", held for 14.337247496s
	I0819 10:29:16.969437    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:16.969573    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:29:16.992258    4789 out.go:177] * Found network options:
	I0819 10:29:17.014265    4789 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:29:17.037508    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.037542    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.037561    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038432    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038682    4789 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:29:17.038835    4789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:29:17.038873    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:29:17.038922    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:29:17.038957    4789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:29:17.039067    4789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:29:17.039087    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:29:17.039116    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039298    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:29:17.039332    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039497    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:29:17.039590    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039679    4789 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:29:17.039721    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:29:17.039809    4789 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:29:17.074320    4789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:29:17.074385    4789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:29:17.120302    4789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
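[editor's note] The `find ... -exec mv {} {}.mk_disabled` above side-lines any bridge/podman CNI configs so they cannot conflict with the cluster's own CNI. The same effect sketched in Go with filepath.Glob, patterns copied from the command:

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// disableCNIConfs renames bridge/podman configs in dir to *.mk_disabled,
// skipping files that are already disabled.
func disableCNIConfs(dir string) ([]string, error) {
    var disabled []string
    for _, pat := range []string{"*bridge*", "*podman*"} {
        matches, err := filepath.Glob(filepath.Join(dir, pat))
        if err != nil {
            return nil, err
        }
        for _, m := range matches {
            if strings.HasSuffix(m, ".mk_disabled") {
                continue
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                return nil, err
            }
            disabled = append(disabled, m)
        }
    }
    return disabled, nil
}

func main() {
    d, err := disableCNIConfs("/etc/cni/net.d")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    fmt.Println("disabled:", d) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist]
}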
	I0819 10:29:17.120318    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.120398    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.135851    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:29:17.144402    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:29:17.152735    4789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.152784    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:29:17.161185    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.169599    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:29:17.177908    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:29:17.186319    4789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:29:17.194967    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:29:17.203702    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:29:17.212228    4789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:29:17.220632    4789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:29:17.228164    4789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:29:17.235717    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.329551    4789 ssh_runner.go:195] Run: sudo systemctl restart containerd
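[editor's note] Each `sed -i -r` above is a line-oriented rewrite of /etc/containerd/config.toml, e.g. forcing `SystemdCgroup = false` to select the cgroupfs driver. The equivalent edit in Go, using the same pattern as the sed expression:

package main

import (
    "fmt"
    "os"
    "regexp"
)

// setSystemdCgroup rewrites every `SystemdCgroup = ...` line in the
// containerd config, preserving each line's indentation, mirroring
// sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'.
func setSystemdCgroup(path string, enabled bool) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %v", enabled)))
    return os.WriteFile(path, out, 0644)
}

func main() {
    if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}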
	I0819 10:29:17.348829    4789 start.go:495] detecting cgroup driver to use...
	I0819 10:29:17.348909    4789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:29:17.363903    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.374976    4789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:29:17.393061    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:29:17.404238    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.414728    4789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:29:17.438632    4789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:29:17.449143    4789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:29:17.464536    4789 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:29:17.467445    4789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:29:17.474809    4789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:29:17.488421    4789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:29:17.581504    4789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:29:17.684960    4789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:29:17.684980    4789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
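[editor's note] The 130-byte /etc/docker/daemon.json pushed here is not echoed in the log; a plausible shape for a cgroupfs configuration (the exact fields are an assumption, not read from this run), marshalled the way a Go caller might build it:

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // Hypothetical daemon.json selecting the cgroupfs cgroup driver;
    // minikube's real payload for this run is not shown in the log.
    cfg := map[string]any{
        "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
        "log-driver":     "json-file",
        "log-opts":       map[string]string{"max-size": "100m"},
        "storage-driver": "overlay2",
    }
    b, _ := json.MarshalIndent(cfg, "", "  ")
    fmt.Println(string(b))
}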
	I0819 10:29:17.699658    4789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:29:17.803979    4789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:30:18.773891    4789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.968555005s)
	I0819 10:30:18.774012    4789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0819 10:30:18.808676    4789 out.go:201] 
	W0819 10:30:18.829152    4789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 19 17:29:15 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570013158Z" level=info msg="Starting up"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.570447745Z" level=info msg="containerd not running, starting managed containerd"
	Aug 19 17:29:15 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:15.572542412Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.584880924Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603137975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603181724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603219390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603233227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603303033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603338653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603471354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603509282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603521199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603528665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603591360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.603811486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605351283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605389063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605504861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605538594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605610859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.605677674Z" level=info msg="metadata content store policy set" policy=shared
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607907354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607976584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.607991948Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608010711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608023403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608093276Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608724366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608874333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608913351Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608929178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608943960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.608968346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609006571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609021660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609032833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609044499Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609055485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609066063Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609088279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609103865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609115537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609130257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609139734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609151164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609161605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609173829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609185591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609200246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609211000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609224200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609237871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609251525Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609296616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609316285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609327369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609362155Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609478815Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609512436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609530768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609541857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609553085Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.609563545Z" level=info msg="NRI interface is disabled by configuration."
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610591556Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610680787Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 19 17:29:15 ha-431000-m03 dockerd[521]: time="2024-08-19T17:29:15.610769049Z" level=info msg="containerd successfully booted in 0.026402s"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.601341697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.606766805Z" level=info msg="Loading containers: start."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.688780306Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.769433920Z" level=info msg="Loading containers: done."
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776749571Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.776865122Z" level=info msg="Daemon has completed initialization"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.804822251Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 19 17:29:16 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:16.805010917Z" level=info msg="API listen on [::]:2376"
	Aug 19 17:29:16 ha-431000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.814047535Z" level=info msg="Processing signal 'terminated'"
	Aug 19 17:29:17 ha-431000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815466623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815881336Z" level=info msg="Daemon shutdown complete"
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.815956644Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 19 17:29:17 ha-431000-m03 dockerd[514]: time="2024-08-19T17:29:17.816022765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 19 17:29:18 ha-431000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 19 17:29:18 ha-431000-m03 dockerd[921]: time="2024-08-19T17:29:18.853267859Z" level=info msg="Starting up"
	Aug 19 17:30:18 ha-431000-m03 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 19 17:30:18 ha-431000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
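[editor's note] The decisive line in the journal above is dockerd's managed-containerd handshake: on the second start (pid 921), dialing /run/containerd/containerd.sock never succeeds and hits the context deadline after a minute, which matches the 1m0.9s `sudo systemctl restart docker` took before failing. A hypothetical stand-alone probe for that socket (not a minikube command), useful when reproducing this by hand on the guest:

package main

import (
    "fmt"
    "net"
    "os"
    "time"
)

func main() {
    // Dial the containerd socket the way dockerd would, but with a
    // short explicit deadline so a hang is reported quickly.
    conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
    if err != nil {
        fmt.Fprintf(os.Stderr, "containerd socket not answering: %v\n", err)
        os.Exit(1)
    }
    conn.Close()
    fmt.Println("containerd socket is accepting connections")
}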
	W0819 10:30:18.829235    4789 out.go:270] * 
	W0819 10:30:18.830413    4789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:30:18.888275    4789 out.go:201] 
	
	
	==> Docker <==
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.004579691Z" level=info msg="shim disconnected" id=e7cacf032435fe5fd74c9ff947e51071e84739d9cdfb1d3f0b1c3f7f72df50f6 namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1269]: time="2024-08-19T17:43:16.004599876Z" level=info msg="ignoring event" container=e7cacf032435fe5fd74c9ff947e51071e84739d9cdfb1d3f0b1c3f7f72df50f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.004799413Z" level=warning msg="cleaning up after shim disconnected" id=e7cacf032435fe5fd74c9ff947e51071e84739d9cdfb1d3f0b1c3f7f72df50f6 namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.004913234Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.023070076Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:43:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.540369658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.546150369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.546220724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:43:16 ha-431000 dockerd[1275]: time="2024-08-19T17:43:16.546357823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:44:50 ha-431000 dockerd[1269]: time="2024-08-19T17:44:50.401856293Z" level=info msg="ignoring event" container=262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:44:50 ha-431000 dockerd[1275]: time="2024-08-19T17:44:50.402692128Z" level=info msg="shim disconnected" id=262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd namespace=moby
	Aug 19 17:44:50 ha-431000 dockerd[1275]: time="2024-08-19T17:44:50.402914048Z" level=warning msg="cleaning up after shim disconnected" id=262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd namespace=moby
	Aug 19 17:44:50 ha-431000 dockerd[1275]: time="2024-08-19T17:44:50.402953271Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:44:50 ha-431000 dockerd[1275]: time="2024-08-19T17:44:50.479732127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:44:50 ha-431000 dockerd[1275]: time="2024-08-19T17:44:50.480738566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:44:50 ha-431000 dockerd[1275]: time="2024-08-19T17:44:50.480772701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:44:50 ha-431000 dockerd[1275]: time="2024-08-19T17:44:50.481071417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:45:52 ha-431000 dockerd[1269]: time="2024-08-19T17:45:52.031811758Z" level=info msg="ignoring event" container=1bb1a081d563e446b81d2b1bc9459c30ebceeea77aff1524782cdaee587f8f99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:45:52 ha-431000 dockerd[1275]: time="2024-08-19T17:45:52.031827148Z" level=info msg="shim disconnected" id=1bb1a081d563e446b81d2b1bc9459c30ebceeea77aff1524782cdaee587f8f99 namespace=moby
	Aug 19 17:45:52 ha-431000 dockerd[1275]: time="2024-08-19T17:45:52.032026711Z" level=warning msg="cleaning up after shim disconnected" id=1bb1a081d563e446b81d2b1bc9459c30ebceeea77aff1524782cdaee587f8f99 namespace=moby
	Aug 19 17:45:52 ha-431000 dockerd[1275]: time="2024-08-19T17:45:52.032036902Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:45:52 ha-431000 dockerd[1275]: time="2024-08-19T17:45:52.597383319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:45:52 ha-431000 dockerd[1275]: time="2024-08-19T17:45:52.597454331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:45:52 ha-431000 dockerd[1275]: time="2024-08-19T17:45:52.597467024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:45:52 ha-431000 dockerd[1275]: time="2024-08-19T17:45:52.597830225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4c18dbcc00045       604f5db92eaa8                                                                                         43 seconds ago       Running             kube-apiserver            2                   5a0fe916eaf1d       kube-apiserver-ha-431000
	1bb1a081d563e       604f5db92eaa8                                                                                         About a minute ago   Exited              kube-apiserver            1                   5a0fe916eaf1d       kube-apiserver-ha-431000
	e3a7fa32f1ca2       6e38f40d628db                                                                                         3 minutes ago        Running             storage-provisioner       1                   868ee98671e83       storage-provisioner
	73731822fbc4d       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   90ec229d87c2c       kube-vip-ha-431000
	da6e4a61b6cf8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   16 minutes ago       Running             busybox                   0                   6d38fc70c811c       busybox-7dff88458-x7m6m
	b9d1bccf00c94       cbb01a7bd410d                                                                                         18 minutes ago       Running             coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	e7cacf032435f       6e38f40d628db                                                                                         18 minutes ago       Exited              storage-provisioner       0                   868ee98671e83       storage-provisioner
	a3891ab602da5       cbb01a7bd410d                                                                                         18 minutes ago       Running             coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              18 minutes ago       Running             kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                         18 minutes ago       Running             kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	ed733554ed160       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     18 minutes ago       Exited              kube-vip                  0                   90ec229d87c2c       kube-vip-ha-431000
	11d9cd3b2f49f       1766f54c897f0                                                                                         18 minutes ago       Running             kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                         18 minutes ago       Running             etcd                      0                   fc30d54d1b565       etcd-ha-431000
	2801f8f44773b       045733566833c                                                                                         18 minutes ago       Running             kube-controller-manager   0                   80d21805f230b       kube-controller-manager-ha-431000
	
	
	==> coredns [a3891ab602da] <==
	[INFO] 10.244.1.2:40959 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135434s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[384323591]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.607) (total time: 12726ms):
	Trace[384323591]: ---"Objects listed" error:Unauthorized 12726ms (17:45:24.333)
	Trace[384323591]: [12.726289493s] [12.726289493s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[183169271]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.561) (total time: 12772ms):
	Trace[183169271]: ---"Objects listed" error:Unauthorized 12772ms (17:45:24.334)
	Trace[183169271]: [12.77286543s] [12.77286543s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[321930627]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.615) (total time: 12720ms):
	Trace[321930627]: ---"Objects listed" error:Unauthorized 12719ms (17:45:24.334)
	Trace[321930627]: [12.72052183s] [12.72052183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	
	
	==> coredns [b9d1bccf00c9] <==
	[INFO] 10.244.0.4:32818 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00017028s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[593417891]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.204) (total time: 13131ms):
	Trace[593417891]: ---"Objects listed" error:Unauthorized 13130ms (17:45:24.335)
	Trace[593417891]: [13.131401942s] [13.131401942s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1133648867]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.419) (total time: 12917ms):
	Trace[1133648867]: ---"Objects listed" error:Unauthorized 12916ms (17:45:24.335)
	Trace[1133648867]: [12.917404362s] [12.917404362s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1960632058]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.301) (total time: 13035ms):
	Trace[1960632058]: ---"Objects listed" error:Unauthorized 13034ms (17:45:24.335)
	Trace[1960632058]: [13.035512102s] [13.035512102s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
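	Both coredns replicas show the same pattern: list/watch calls fail with Unauthorized in the 17:45:11-17:45:24 window, consistent with the pods holding a service-account token that the restarted API server no longer accepted. A hedged way to verify and clear such a state, assuming kubectl is pointed at this cluster:
	
	    # Did coredns recover on its own after the API server came back?
	    kubectl -n kube-system get pods -l k8s-app=kube-dns
	    # Tail current logs; continuing Unauthorized lines would mean a stale token.
	    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
	    # Recreating the pods forces a fresh token mount.
	    kubectl -n kube-system rollout restart deployment coredns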
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:46:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:46:17 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:46:17 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:46:17 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:46:17 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7b5b85e2c64405f969f3e24eb671b2e
	  System UUID:                7f844fbb-0000-0000-b5d6-699bdfe1640c
	  Boot ID:                    cb211998-dc9c-4fd5-a169-3f6eeb2403fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x7m6m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-6f6b679f8f-hr2qx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-6f6b679f8f-vc76p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-lvdbg                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-5l56s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  RegisteredNode           56s                node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeNotReady             53s                node-controller  Node ha-431000 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  19s (x2 over 18m)  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x2 over 18m)  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x2 over 18m)  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19s (x2 over 18m)  kubelet          Node ha-431000 status is now: NodeReady
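	The event tail shows the control-plane node bouncing during the test (NodeNotReady 53s ago, NodeReady again 19s ago). For a quick re-check of current conditions without the full describe output, a sketch assuming kubectl access:
	
	    # One Ready column per node instead of the full describe dump.
	    kubectl get nodes -o wide
	    # Or just this node's Ready condition via a JSONPath filter.
	    kubectl get node ha-431000 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'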
	
	
	Name:               ha-431000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:46:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:45:42 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:45:42 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:45:42 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:45:42 +0000   Mon, 19 Aug 2024 17:45:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-431000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 23b28f7a64ff41debdc4b1f7578d67f9
	  System UUID:                decf4e23-0000-0000-95db-084dbcc69753
	  Boot ID:                    6d3beb70-5d16-4fbe-a6a7-3dc221ea93ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2l9lq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-431000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-qmgqd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-431000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-431000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-5h7j2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-431000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-431000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  68s (x8 over 69s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 69s)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x7 over 69s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	
	
	Name:               ha-431000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_42_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:42:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:46:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:46:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:46:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:46:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:46:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-431000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e639484a1c98402fa6d9e2bb5fe71e03
	  System UUID:                c32a4140-0000-0000-838a-ef53ae6c724a
	  Boot ID:                    65e77bd5-3b1f-49d0-a224-e0cd2d7b346a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wfcpq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kindnet-kcrzx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-proxy-2fn5w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeNotReady             59s                  node-controller  Node ha-431000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           56s                  node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeHasSufficientMemory  33s (x3 over 4m7s)   kubelet          Node ha-431000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x3 over 4m7s)   kubelet          Node ha-431000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x3 over 4m7s)   kubelet          Node ha-431000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeReady                33s (x2 over 3m44s)  kubelet          Node ha-431000-m04 status is now: NodeReady
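	The m04 trail (RegisteredNode at 4m5s, NodeNotReady at 59s, NodeReady at 33s) reads more easily in chronological order from the events API; a sketch, assuming kubectl access:
	
	    # Events involving this node, oldest first, across all namespaces.
	    kubectl get events -A \
	      --field-selector involvedObject.name=ha-431000-m04 \
	      --sort-by=.lastTimestamp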
	
	
	==> dmesg <==
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.519395] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.106046] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.754357] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.260100] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.108326] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.116397] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.050322] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.370658] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.100232] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.114416] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.133019] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.706453] systemd-fstab-generator[1261]: Ignoring "noauto" option for root device
	[  +0.055873] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.542020] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +4.524199] systemd-fstab-generator[1651]: Ignoring "noauto" option for root device
	[  +0.058523] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.145787] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
	[  +0.090131] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.001426] kauditd_printk_skb: 35 callbacks suppressed
	[Aug19 17:28] kauditd_printk_skb: 15 callbacks suppressed
	[ +36.695422] kauditd_printk_skb: 24 callbacks suppressed
	[Aug19 17:44] kauditd_printk_skb: 2 callbacks suppressed
	[Aug19 17:46] kauditd_printk_skb: 3 callbacks suppressed
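	The dmesg excerpt is dominated by systemd-fstab-generator notices and kauditd rate-limiting; nothing here suggests a kernel-level fault. To pull the ring buffer from the node directly (a sketch assuming the ha-431000 profile; the tail runs on the host side of the pipe):
	
	    # Last 40 kernel log lines from inside the minikube VM.
	    minikube ssh -p ha-431000 -- sudo dmesg | tail -n 40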
	
	
	==> etcd [39fe08877284] <==
	{"level":"info","ts":"2024-08-19T17:46:35.941786Z","caller":"traceutil/trace.go:171","msg":"trace[1570856760] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:5167; }","duration":"131.096327ms","start":"2024-08-19T17:46:35.810683Z","end":"2024-08-19T17:46:35.941779Z","steps":["trace[1570856760] 'agreement among raft nodes before linearized reading'  (duration: 130.501428ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:35.942179Z","caller":"traceutil/trace.go:171","msg":"trace[359797004] transaction","detail":"{read_only:false; response_revision:5168; number_of_response:1; }","duration":"172.746267ms","start":"2024-08-19T17:46:35.769426Z","end":"2024-08-19T17:46:35.942172Z","steps":["trace[359797004] 'process raft request'  (duration: 172.584982ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:35.942396Z","caller":"traceutil/trace.go:171","msg":"trace[1774665831] transaction","detail":"{read_only:false; response_revision:5169; number_of_response:1; }","duration":"170.041917ms","start":"2024-08-19T17:46:35.772348Z","end":"2024-08-19T17:46:35.942390Z","steps":["trace[1774665831] 'process raft request'  (duration: 169.793056ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:35.942565Z","caller":"traceutil/trace.go:171","msg":"trace[773130752] transaction","detail":"{read_only:false; response_revision:5170; number_of_response:1; }","duration":"168.825951ms","start":"2024-08-19T17:46:35.773733Z","end":"2024-08-19T17:46:35.942559Z","steps":["trace[773130752] 'process raft request'  (duration: 168.54732ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:35.942920Z","caller":"traceutil/trace.go:171","msg":"trace[749631249] transaction","detail":"{read_only:false; response_revision:5171; number_of_response:1; }","duration":"166.743605ms","start":"2024-08-19T17:46:35.776172Z","end":"2024-08-19T17:46:35.942915Z","steps":["trace[749631249] 'process raft request'  (duration: 166.366847ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:35.943082Z","caller":"traceutil/trace.go:171","msg":"trace[1825282057] transaction","detail":"{read_only:false; response_revision:5172; number_of_response:1; }","duration":"166.850938ms","start":"2024-08-19T17:46:35.776227Z","end":"2024-08-19T17:46:35.943078Z","steps":["trace[1825282057] 'process raft request'  (duration: 166.58486ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.462614Z","caller":"traceutil/trace.go:171","msg":"trace[1854764867] transaction","detail":"{read_only:false; response_revision:5265; number_of_response:1; }","duration":"118.740693ms","start":"2024-08-19T17:46:36.343862Z","end":"2024-08-19T17:46:36.462603Z","steps":["trace[1854764867] 'process raft request'  (duration: 118.447523ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.464269Z","caller":"traceutil/trace.go:171","msg":"trace[1182891841] linearizableReadLoop","detail":"{readStateIndex:6005; appliedIndex:6006; }","duration":"115.508651ms","start":"2024-08-19T17:46:36.348753Z","end":"2024-08-19T17:46:36.464262Z","steps":["trace[1182891841] 'read index received'  (duration: 115.506132ms)","trace[1182891841] 'applied index is now lower than readState.Index'  (duration: 1.893µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:46:36.465328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.564413ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:243 size:176748"}
	{"level":"info","ts":"2024-08-19T17:46:36.465544Z","caller":"traceutil/trace.go:171","msg":"trace[1821756560] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:243; response_revision:5265; }","duration":"116.787137ms","start":"2024-08-19T17:46:36.348751Z","end":"2024-08-19T17:46:36.465538Z","steps":["trace[1821756560] 'agreement among raft nodes before linearized reading'  (duration: 115.804198ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.465351Z","caller":"traceutil/trace.go:171","msg":"trace[600545860] transaction","detail":"{read_only:false; response_revision:5266; number_of_response:1; }","duration":"120.190565ms","start":"2024-08-19T17:46:36.345153Z","end":"2024-08-19T17:46:36.465344Z","steps":["trace[600545860] 'process raft request'  (duration: 119.840121ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.465587Z","caller":"traceutil/trace.go:171","msg":"trace[2070794339] transaction","detail":"{read_only:false; response_revision:5267; number_of_response:1; }","duration":"117.758504ms","start":"2024-08-19T17:46:36.347823Z","end":"2024-08-19T17:46:36.465581Z","steps":["trace[2070794339] 'process raft request'  (duration: 117.2638ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.679595Z","caller":"traceutil/trace.go:171","msg":"trace[22530896] transaction","detail":"{read_only:false; response_revision:5281; number_of_response:1; }","duration":"171.840804ms","start":"2024-08-19T17:46:36.507741Z","end":"2024-08-19T17:46:36.679582Z","steps":["trace[22530896] 'process raft request'  (duration: 171.749538ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.679960Z","caller":"traceutil/trace.go:171","msg":"trace[2105564107] transaction","detail":"{read_only:false; response_revision:5279; number_of_response:1; }","duration":"179.54169ms","start":"2024-08-19T17:46:36.500406Z","end":"2024-08-19T17:46:36.679947Z","steps":["trace[2105564107] 'process raft request'  (duration: 82.804149ms)","trace[2105564107] 'compare'  (duration: 96.118432ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:46:36.680423Z","caller":"traceutil/trace.go:171","msg":"trace[1675326400] transaction","detail":"{read_only:false; response_revision:5280; number_of_response:1; }","duration":"176.581964ms","start":"2024-08-19T17:46:36.503832Z","end":"2024-08-19T17:46:36.680414Z","steps":["trace[1675326400] 'process raft request'  (duration: 175.611335ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.692852Z","caller":"traceutil/trace.go:171","msg":"trace[655925894] transaction","detail":"{read_only:false; response_revision:5283; number_of_response:1; }","duration":"147.605873ms","start":"2024-08-19T17:46:36.545238Z","end":"2024-08-19T17:46:36.692844Z","steps":["trace[655925894] 'process raft request'  (duration: 147.270294ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.873231Z","caller":"traceutil/trace.go:171","msg":"trace[1337233270] transaction","detail":"{read_only:false; response_revision:5288; number_of_response:1; }","duration":"174.628901ms","start":"2024-08-19T17:46:36.698486Z","end":"2024-08-19T17:46:36.873115Z","steps":["trace[1337233270] 'process raft request'  (duration: 137.527986ms)","trace[1337233270] 'compare'  (duration: 36.55365ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:46:36.877259Z","caller":"traceutil/trace.go:171","msg":"trace[432248338] linearizableReadLoop","detail":"{readStateIndex:6032; appliedIndex:6034; }","duration":"174.445065ms","start":"2024-08-19T17:46:36.702803Z","end":"2024-08-19T17:46:36.877248Z","steps":["trace[432248338] 'read index received'  (duration: 174.440725ms)","trace[432248338] 'applied index is now lower than readState.Index'  (duration: 3.062µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:46:36.878323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.504595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/busybox\" ","response":"range_response_count:1 size:3172"}
	{"level":"info","ts":"2024-08-19T17:46:36.878545Z","caller":"traceutil/trace.go:171","msg":"trace[2029787669] range","detail":"{range_begin:/registry/deployments/default/busybox; range_end:; response_count:1; response_revision:5288; }","duration":"175.732622ms","start":"2024-08-19T17:46:36.702801Z","end":"2024-08-19T17:46:36.878533Z","steps":["trace[2029787669] 'agreement among raft nodes before linearized reading'  (duration: 175.444316ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.878660Z","caller":"traceutil/trace.go:171","msg":"trace[1016164393] transaction","detail":"{read_only:false; response_revision:5294; number_of_response:1; }","duration":"114.675382ms","start":"2024-08-19T17:46:36.763977Z","end":"2024-08-19T17:46:36.878653Z","steps":["trace[1016164393] 'process raft request'  (duration: 114.645751ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.878896Z","caller":"traceutil/trace.go:171","msg":"trace[1121317620] transaction","detail":"{read_only:false; response_revision:5291; number_of_response:1; }","duration":"174.534723ms","start":"2024-08-19T17:46:36.704355Z","end":"2024-08-19T17:46:36.878889Z","steps":["trace[1121317620] 'process raft request'  (duration: 174.000674ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.878914Z","caller":"traceutil/trace.go:171","msg":"trace[1612918809] transaction","detail":"{read_only:false; response_revision:5292; number_of_response:1; }","duration":"172.732809ms","start":"2024-08-19T17:46:36.706173Z","end":"2024-08-19T17:46:36.878906Z","steps":["trace[1612918809] 'process raft request'  (duration: 172.325033ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.879350Z","caller":"traceutil/trace.go:171","msg":"trace[260601305] transaction","detail":"{read_only:false; response_revision:5289; number_of_response:1; }","duration":"179.051022ms","start":"2024-08-19T17:46:36.700292Z","end":"2024-08-19T17:46:36.879343Z","steps":["trace[260601305] 'process raft request'  (duration: 177.958816ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:46:36.879374Z","caller":"traceutil/trace.go:171","msg":"trace[450011482] transaction","detail":"{read_only:false; response_revision:5290; number_of_response:1; }","duration":"176.687721ms","start":"2024-08-19T17:46:36.702682Z","end":"2024-08-19T17:46:36.879370Z","steps":["trace[450011482] 'process raft request'  (duration: 175.638625ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:46:37 up 19 min,  0 users,  load average: 0.49, 0.48, 0.27
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:45:53.914828       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:03.921203       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:03.921488       1 main.go:299] handling current node
	I0819 17:46:03.921597       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:03.921638       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:03.921943       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:03.922047       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:13.920299       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:13.920346       1 main.go:299] handling current node
	I0819 17:46:13.920359       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:13.920372       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:13.920491       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:13.920523       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:23.913958       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:23.914037       1 main.go:299] handling current node
	I0819 17:46:23.914060       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:23.914069       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:23.914586       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:23.914700       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:33.918534       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:33.918663       1 main.go:299] handling current node
	I0819 17:46:33.918861       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:33.918971       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:33.919255       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:33.919335       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
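	kindnet is cycling over the three registered nodes roughly every 10s and sees the expected pod CIDRs (10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24). The same node-to-CIDR mapping it programs routes from can be read off the Node specs; a sketch assuming kubectl access:
	
	    # Node-to-podCIDR mapping as recorded in the API.
	    kubectl get nodes -o custom-columns=NAME:.metadata.name,CIDR:.spec.podCIDR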
	
	
	==> kube-apiserver [1bb1a081d563] <==
	E0819 17:45:35.193806       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: leader changed]"
	E0819 17:45:35.193846       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: leader changed]"
	E0819 17:45:35.194078       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: leader changed]"
	E0819 17:45:35.194484       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: leader changed]"
	E0819 17:45:35.194843       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: leader changed]"
	I0819 17:45:35.252489       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:45:35.402877       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 17:45:35.403027       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 17:45:35.684914       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 17:45:35.701632       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0819 17:45:35.830161       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 17:45:36.276180       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:45:36.276393       1 policy_source.go:224] refreshing policies
	I0819 17:45:36.869197       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 17:45:36.902234       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 17:45:36.902338       1 aggregator.go:171] initial CRD sync complete...
	I0819 17:45:36.902399       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 17:45:36.902529       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 17:45:37.484692       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 17:45:37.500529       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:45:37.503085       1 cache.go:39] Caches are synced for autoregister controller
	I0819 17:45:37.503921       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 17:45:37.506120       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 17:45:37.764008       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	F0819 17:45:51.884026       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
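	The closing F-level line is a fatal exit: this apiserver instance died when the start-service-ip-repair-controllers post-start hook could not complete its initial IP/port allocation check, and the [4c18dbcc0004] instance below is its replacement. To inspect the restart history on the node (a sketch assuming the docker runtime reported in the node info above):
	
	    # All kube-apiserver containers, including the crashed one.
	    minikube ssh -p ha-431000 -- docker ps -a --filter name=kube-apiserver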
	
	
	==> kube-apiserver [4c18dbcc0004] <==
	I0819 17:45:54.119098       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0819 17:45:54.119108       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0819 17:45:54.119169       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 17:45:54.121257       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:45:54.193415       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 17:45:54.199587       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 17:45:54.199767       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 17:45:54.199972       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:45:54.200503       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 17:45:54.200576       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 17:45:54.200721       1 aggregator.go:171] initial CRD sync complete...
	I0819 17:45:54.200783       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 17:45:54.200808       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 17:45:54.200868       1 cache.go:39] Caches are synced for autoregister controller
	I0819 17:45:54.201060       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 17:45:54.201353       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 17:45:54.201666       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 17:45:54.212784       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:45:54.212850       1 policy_source.go:224] refreshing policies
	I0819 17:45:54.223904       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 17:45:54.285880       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 17:45:54.299959       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:45:54.998278       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 17:45:55.306541       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	I0819 17:45:55.317972       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2801f8f44773] <==
	I0819 17:46:36.885964       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="53.663µs"
	I0819 17:46:36.891522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="191.456355ms"
	I0819 17:46:36.891777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.471µs"
	I0819 17:46:37.108145       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="207.370125ms"
	I0819 17:46:37.108272       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="44.439µs"
	I0819 17:46:37.108309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="205.48499ms"
	I0819 17:46:37.108409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.329µs"
	I0819 17:46:37.341794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="188.642735ms"
	I0819 17:46:37.341918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="82.136µs"
	I0819 17:46:37.342681       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="189.515111ms"
	I0819 17:46:37.343411       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="209.869µs"
	I0819 17:46:37.563071       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="18.177936ms"
	I0819 17:46:37.564338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="93.187µs"
	I0819 17:46:37.567230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.661217ms"
	I0819 17:46:37.568255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.263µs"
	I0819 17:46:37.734431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="164.575777ms"
	I0819 17:46:37.734499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.324µs"
	I0819 17:46:37.757540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="16.227755ms"
	I0819 17:46:37.757576       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.93751ms"
	I0819 17:46:37.759212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="58.764µs"
	I0819 17:46:37.759243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.49µs"
	I0819 17:46:37.968113       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="48.518654ms"
	I0819 17:46:37.968235       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="160.53005ms"
	I0819 17:46:37.972519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="79.407µs"
	I0819 17:46:37.973002       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="72.169µs"
	
	
	==> kube-proxy [889ab608901b] <==
	E0819 17:44:04.860226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:23.290432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:23.290751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:23.290543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:23.291205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:26.362595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:26.363019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:41.722266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:41.722341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:41.722406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:41.722425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:54.009699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:54.009972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:45:09.369057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:45:09.369337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:45:30.873553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:45:30.873673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:45:33.945461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:45:33.945676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	E0819 17:45:04.967938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0819 17:45:05.262907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0819 17:45:06.531305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0819 17:45:07.849911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0819 17:45:08.312166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 17:45:09.806525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 17:45:10.272292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	W0819 17:45:25.011877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:45:25.011937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:28.351281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:45:28.351338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:31.008358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 17:45:31.008417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.186287       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:45:33.186381       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 17:45:36.848394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:45:36.848442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0819 17:45:54.148342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.169.0.5:50394->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.169.0.5:50378->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148560       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.169.0.5:50362->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.169.0.5:50356->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.169.0.5:50346->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.149161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.169.0.5:50400->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.149643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.169.0.5:50358->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.149841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.169.0.5:50398->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 19 17:45:24 ha-431000 kubelet[2153]: W0819 17:45:24.728538    2153 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2450": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:45:24 ha-431000 kubelet[2153]: W0819 17:45:24.729197    2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2569": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:45:24 ha-431000 kubelet[2153]: E0819 17:45:24.729325    2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2569\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:45:24 ha-431000 kubelet[2153]: W0819 17:45:24.729333    2153 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2465": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:45:24 ha-431000 kubelet[2153]: W0819 17:45:24.729429    2153 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2586": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:45:24 ha-431000 kubelet[2153]: E0819 17:45:24.729520    2153 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2586\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:45:24 ha-431000 kubelet[2153]: E0819 17:45:24.729223    2153 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2450\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:45:24 ha-431000 kubelet[2153]: E0819 17:45:24.729452    2153 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2465\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:45:24 ha-431000 kubelet[2153]: I0819 17:45:24.729719    2153 status_manager.go:851] "Failed to get status for pod" podUID="e68070ef-bdea-45e6-b7a8-8834534fa616" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 19 17:45:27 ha-431000 kubelet[2153]: I0819 17:45:27.800427    2153 status_manager.go:851] "Failed to get status for pod" podUID="7c2fca8c814adb84661f46fda3b2d591" pod="kube-system/kube-vip-ha-431000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-431000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 19 17:45:30 ha-431000 kubelet[2153]: E0819 17:45:30.872906    2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-431000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 19 17:45:30 ha-431000 kubelet[2153]: E0819 17:45:30.872893    2153 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-431000.17ed32299fbaf8bc\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-431000.17ed32299fbaf8bc  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-431000,UID:4be26ba36a583cb5cf787c7b12260cd6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-431000,},FirstTimestamp:2024-08-19 17:43:06.707646652 +0000 UTC m=+921.301345273,LastTimestamp:2024-08-19 17:43:10.714412846 +0000 UTC m=+925.308111459,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-431000,}"
	Aug 19 17:45:30 ha-431000 kubelet[2153]: I0819 17:45:30.873135    2153 status_manager.go:851] "Failed to get status for pod" podUID="e68070ef-bdea-45e6-b7a8-8834534fa616" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 19 17:45:33 ha-431000 kubelet[2153]: I0819 17:45:33.944568    2153 status_manager.go:851] "Failed to get status for pod" podUID="4be26ba36a583cb5cf787c7b12260cd6" pod="kube-system/kube-apiserver-ha-431000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 19 17:45:33 ha-431000 kubelet[2153]: W0819 17:45:33.944572    2153 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:45:33 ha-431000 kubelet[2153]: E0819 17:45:33.945072    2153 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:45:37 ha-431000 kubelet[2153]: I0819 17:45:37.016472    2153 status_manager.go:851] "Failed to get status for pod" podUID="3b72bc1b6bbd421d78a846f36f8fd589" pod="kube-system/etcd-ha-431000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000\": dial tcp 192.169.0.254:8443: connect: no route to host"
	Aug 19 17:45:45 ha-431000 kubelet[2153]: E0819 17:45:45.531800    2153 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:45:45 ha-431000 kubelet[2153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:45:45 ha-431000 kubelet[2153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:45:45 ha-431000 kubelet[2153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:45:45 ha-431000 kubelet[2153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:45:52 ha-431000 kubelet[2153]: I0819 17:45:52.548292    2153 scope.go:117] "RemoveContainer" containerID="262471364c991634931873ae89eae2fd33683db859a09ad5d79d8a659fdb30bd"
	Aug 19 17:45:52 ha-431000 kubelet[2153]: I0819 17:45:52.548496    2153 scope.go:117] "RemoveContainer" containerID="1bb1a081d563e446b81d2b1bc9459c30ebceeea77aff1524782cdaee587f8f99"
	Aug 19 17:45:52 ha-431000 kubelet[2153]: I0819 17:45:52.549367    2153 status_manager.go:851] "Failed to get status for pod" podUID="4be26ba36a583cb5cf787c7b12260cd6" pod="kube-system/kube-apiserver-ha-431000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000\": dial tcp 192.169.0.254:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (93.02s)
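Note on the post-mortem logs above: kube-proxy, the scheduler, and the kubelet all show the same signature — dials to the HA virtual IP https://control-plane.minikube.internal:8443 (192.169.0.254) failing with "connect: no route to host" while the control plane restarts, followed by transient RBAC "forbidden" errors that typically clear once the restarted apiserver's caches sync. A minimal manual probe of that path, as a sketch only — it assumes shell access to one of the cluster nodes, and the IPs and profile name are taken from the logs above:

    # Does anything answer on the kube-vip virtual IP? (-k: the apiserver cert is not trusted here)
    curl -k https://192.169.0.254:8443/healthz

    # Compare against the primary node's own apiserver endpoint
    curl -k https://192.169.0.5:8443/healthz

    # Once the API answers, check that scheduler RBAC has settled (requires impersonation rights)
    kubectl --context ha-431000 auth can-i list persistentvolumes --as=system:kube-scheduler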

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-431000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-431000 -v=7 --alsologtostderr
E0819 10:47:06.551580    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-431000 -v=7 --alsologtostderr: (33.189421422s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-431000 --wait=true -v=7 --alsologtostderr
E0819 10:50:12.171061    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:50:29.081993    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:50:43.488760    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-431000 --wait=true -v=7 --alsologtostderr: exit status 80 (5m36.251899129s)

                                                
                                                
-- stdout --
	* [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	* Restarting existing hyperkit VM for "ha-431000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	* Enabled addons: 
	
	* Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	* Restarting existing hyperkit VM for "ha-431000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	* Restarting existing hyperkit VM for "ha-431000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - env NO_PROXY=192.169.0.5
	  - env NO_PROXY=192.169.0.5,192.169.0.6
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 10:47:12.990834    6731 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:47:12.991103    6731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:47:12.991108    6731 out.go:358] Setting ErrFile to fd 2...
	I0819 10:47:12.991112    6731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:47:12.991281    6731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:47:12.992723    6731 out.go:352] Setting JSON to false
	I0819 10:47:13.017592    6731 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4603,"bootTime":1724085030,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:47:13.017712    6731 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:47:13.040160    6731 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:47:13.085144    6731 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:47:13.085199    6731 notify.go:220] Checking for updates...
	I0819 10:47:13.129094    6731 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:13.150001    6731 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:47:13.191985    6731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:47:13.234991    6731 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:47:13.255968    6731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:47:13.277879    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:13.278061    6731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:47:13.278758    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:13.278849    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:13.288403    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52017
	I0819 10:47:13.288766    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:13.289188    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:13.289197    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:13.289457    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:13.289596    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:13.317906    6731 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:47:13.359906    6731 start.go:297] selected driver: hyperkit
	I0819 10:47:13.359936    6731 start.go:901] validating driver "hyperkit" against &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:47:13.360173    6731 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:47:13.360383    6731 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:47:13.360591    6731 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:47:13.373620    6731 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:47:13.379058    6731 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:13.379083    6731 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:47:13.382480    6731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:47:13.382556    6731 cni.go:84] Creating CNI manager for ""
	I0819 10:47:13.382566    6731 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:47:13.382642    6731 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:47:13.382745    6731 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:47:13.427064    6731 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:47:13.448053    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:47:13.448130    6731 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:47:13.448197    6731 cache.go:56] Caching tarball of preloaded images
	I0819 10:47:13.448409    6731 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:47:13.448432    6731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:47:13.448617    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:13.449596    6731 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:47:13.449728    6731 start.go:364] duration metric: took 105.822µs to acquireMachinesLock for "ha-431000"
	I0819 10:47:13.449768    6731 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:47:13.449785    6731 fix.go:54] fixHost starting: 
	I0819 10:47:13.450204    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:13.450230    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:13.463559    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52019
	I0819 10:47:13.464010    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:13.464458    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:13.464469    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:13.464831    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:13.465014    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:13.465167    6731 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:47:13.465295    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:13.465439    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:47:13.466971    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 4802 missing from process table
	I0819 10:47:13.467037    6731 fix.go:112] recreateIfNeeded on ha-431000: state=Stopped err=<nil>
	I0819 10:47:13.467066    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	W0819 10:47:13.467199    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:47:13.510101    6731 out.go:177] * Restarting existing hyperkit VM for "ha-431000" ...
	I0819 10:47:13.531063    6731 main.go:141] libmachine: (ha-431000) Calling .Start
	I0819 10:47:13.531337    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:13.531403    6731 main.go:141] libmachine: (ha-431000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:47:13.533562    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 4802 missing from process table
	I0819 10:47:13.533575    6731 main.go:141] libmachine: (ha-431000) DBG | pid 4802 is in state "Stopped"
	I0819 10:47:13.533592    6731 main.go:141] libmachine: (ha-431000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid...
	I0819 10:47:13.534063    6731 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:47:13.685824    6731 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:47:13.685856    6731 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:47:13.685937    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c10e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:13.685980    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c10e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:13.686041    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:47:13.686089    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:47:13.686116    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:47:13.687515    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Pid is 6743
	I0819 10:47:13.687875    6731 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:47:13.687888    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:13.687950    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:47:13.689549    6731 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:47:13.689620    6731 main.go:141] libmachine: (ha-431000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:47:13.689637    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d62c}
	I0819 10:47:13.689650    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 10:47:13.689661    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:47:13.689670    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:47:13.689679    6731 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:47:13.689685    6731 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
	I0819 10:47:13.689750    6731 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:47:13.690466    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:13.690696    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:13.691360    6731 machine.go:93] provisionDockerMachine start ...
	I0819 10:47:13.691391    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:13.691550    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:13.691652    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:13.691765    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:13.691853    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:13.691949    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:13.692101    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:13.692310    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:13.692319    6731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:47:13.695286    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:47:13.768567    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:47:13.769376    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:13.769389    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:13.769397    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:13.769403    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:14.169410    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:47:14.169434    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:47:14.284387    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:14.284423    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:14.284433    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:14.284452    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:14.285281    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:47:14.285292    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:47:20.122707    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:47:20.122768    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:47:20.122798    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:47:20.146889    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:47:24.038753    6731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
	I0819 10:47:27.097051    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:47:27.097068    6731 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:47:27.097216    6731 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:47:27.097227    6731 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:47:27.097372    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.097464    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.097585    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.097687    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.097778    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.097909    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.098097    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.098119    6731 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:47:27.159700    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:47:27.159721    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.159879    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.159986    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.160071    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.160158    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.160304    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.160447    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.160458    6731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:47:27.217596    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:47:27.217617    6731 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:47:27.217642    6731 buildroot.go:174] setting up certificates
	I0819 10:47:27.217648    6731 provision.go:84] configureAuth start
	I0819 10:47:27.217654    6731 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:47:27.217789    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:27.217907    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.218009    6731 provision.go:143] copyHostCerts
	I0819 10:47:27.218040    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:27.218106    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:47:27.218115    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:27.219007    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:47:27.219230    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:27.219271    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:47:27.219275    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:27.219362    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:47:27.219509    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:27.219546    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:47:27.219551    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:27.219626    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:47:27.219767    6731 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:47:27.270993    6731 provision.go:177] copyRemoteCerts
	I0819 10:47:27.271039    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:47:27.271051    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.271175    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.271261    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.271352    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.271445    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:27.302754    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:47:27.302826    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:47:27.322815    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:47:27.322877    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 10:47:27.342451    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:47:27.342511    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:47:27.362246    6731 provision.go:87] duration metric: took 144.581948ms to configureAuth
	I0819 10:47:27.362260    6731 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:47:27.362446    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:27.362461    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:27.362588    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.362675    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.362776    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.362858    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.362949    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.363077    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.363202    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.363214    6731 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:47:27.413858    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:47:27.413870    6731 buildroot.go:70] root file system type: tmpfs
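
Note: buildroot.go decides how to install the docker unit by probing the guest's root filesystem type with `df --output=fstype / | tail -n 1`; on the Buildroot ISO the root is tmpfs, so the unit goes under /lib/systemd/system. The same probe in Go (run locally here with exec.Command; in the log it is sent over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe the provisioner sends over SSH.
        out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
        if err != nil {
            panic(err)
        }
        fstype := strings.TrimSpace(string(out))
        fmt.Println("root file system type:", fstype) // "tmpfs" on the Buildroot guest
    }
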
	I0819 10:47:27.413956    6731 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:47:27.413972    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.414097    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.414209    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.414293    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.414367    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.414499    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.414633    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.414678    6731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:47:27.476805    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:47:27.476825    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.476950    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.477051    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.477141    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.477235    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.477363    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.477517    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.477530    6731 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:47:29.141388    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:47:29.141402    6731 machine.go:96] duration metric: took 15.449700536s to provisionDockerMachine
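
Note: the `sudo diff -u ... || { mv; daemon-reload; enable; restart; }` one-liner a few lines up is an idempotent unit install: diff exits 0 when the rendered docker.service matches what is on disk, so the replace/restart branch runs only when the file changed. Here diff fails because docker.service did not exist yet, hence the "Created symlink" output. A sketch of assembling that command string (illustrative helper, not the real minikube code):

    package main

    import "fmt"

    // installUnitCmd builds the diff-or-replace shell pipeline seen in the log.
    func installUnitCmd(unit string) string {
        path := "/lib/systemd/system/" + unit
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            path, unit)
    }

    func main() {
        fmt.Println(installUnitCmd("docker.service"))
    }
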
	I0819 10:47:29.141419    6731 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:47:29.141427    6731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:47:29.141442    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.141639    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:47:29.141653    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.141751    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.141838    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.141944    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.142024    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.177773    6731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:47:29.182929    6731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:47:29.182945    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:47:29.183045    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:47:29.183232    6731 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:47:29.183239    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:47:29.183446    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:47:29.193329    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
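
Note: filesync.go maps anything under .minikube/files/ onto the guest rootfs verbatim, so .minikube/files/etc/ssl/certs/21742.pem becomes /etc/ssl/certs/21742.pem via mkdir -p plus scp, as the three lines above show. A walk-based sketch of that path mapping (illustrative, not minikube's filesync implementation):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // mapAssets returns the guest destination path for every file under filesDir.
    func mapAssets(filesDir string) (map[string]string, error) {
        dest := map[string]string{}
        err := filepath.WalkDir(filesDir, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            // e.g. .../files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
            dest[p] = strings.TrimPrefix(p, filepath.Clean(filesDir))
            return nil
        })
        return dest, err
    }

    func main() {
        m, err := mapAssets("/Users/jenkins/minikube-integration/19478-1622/.minikube/files")
        if err != nil {
            panic(err)
        }
        for src, dst := range m {
            fmt.Printf("local asset: %s -> %s\n", src, dst)
        }
    }
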
	I0819 10:47:29.226539    6731 start.go:296] duration metric: took 85.108142ms for postStartSetup
	I0819 10:47:29.226566    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.226743    6731 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:47:29.226766    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.226881    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.226983    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.227075    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.227158    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.259218    6731 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:47:29.259277    6731 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:47:29.313364    6731 fix.go:56] duration metric: took 15.863243842s for fixHost
	I0819 10:47:29.313386    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.313537    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.313631    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.313718    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.313802    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.313927    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:29.314073    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:29.314080    6731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:47:29.366201    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089649.282494519
	
	I0819 10:47:29.366218    6731 fix.go:216] guest clock: 1724089649.282494519
	I0819 10:47:29.366223    6731 fix.go:229] Guest: 2024-08-19 10:47:29.282494519 -0700 PDT Remote: 2024-08-19 10:47:29.313376 -0700 PDT m=+16.361598467 (delta=-30.881481ms)
	I0819 10:47:29.366239    6731 fix.go:200] guest clock delta is within tolerance: -30.881481ms
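
Note: fix.go reads the guest clock with `date +%s.%N` (1724089649.282494519 above), compares it with the host clock at the moment of the call, and only rewrites the guest time when the delta exceeds a tolerance; the -30.88ms seen here is within bounds. A sketch of parsing that output and computing the delta (the one-second tolerance is an assumption, not minikube's exact threshold):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1724089649.282494519" // output of `date +%s.%N` on the guest
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64) // %N is always 9 digits
        guest := time.Unix(sec, nsec)

        remote := time.Now()
        delta := guest.Sub(remote)
        const tolerance = time.Second // assumed threshold
        if delta < tolerance && delta > -tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock needs adjustment: %v\n", delta)
        }
    }
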
	I0819 10:47:29.366243    6731 start.go:83] releasing machines lock for "ha-431000", held for 15.916161384s
	I0819 10:47:29.366262    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366404    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:29.366507    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366799    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366892    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366979    6731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:47:29.367012    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.367029    6731 ssh_runner.go:195] Run: cat /version.json
	I0819 10:47:29.367039    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.367114    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.367149    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.367227    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.367237    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.367322    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.367335    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.367423    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.367436    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.444266    6731 ssh_runner.go:195] Run: systemctl --version
	I0819 10:47:29.449674    6731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:47:29.454027    6731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:47:29.454072    6731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:47:29.466466    6731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:47:29.466477    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:29.466578    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:29.483411    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:47:29.492453    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:47:29.501213    6731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:47:29.501260    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:47:29.510090    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:29.519075    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:47:29.528065    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:29.536949    6731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:47:29.545786    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:47:29.554573    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:47:29.563322    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:47:29.572057    6731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:47:29.579919    6731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:47:29.588348    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:29.686832    6731 ssh_runner.go:195] Run: sudo systemctl restart containerd
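
Note: the sed series above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (the cgroupfs driver named at containerd.go:146), migrate runtime names to io.containerd.runc.v2, reset conf_dir, re-enable unprivileged ports, then daemon-reload and restart containerd. A pure-Go sketch of the SystemdCgroup toggle whose regular expression mirrors the sed pattern (file handling simplified to an in-memory buffer):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
            "  SystemdCgroup = true\n")
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
        fmt.Print(string(out))
    }
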
	I0819 10:47:29.707105    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:29.707180    6731 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:47:29.719452    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:29.730098    6731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:47:29.745544    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:29.756577    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:29.767542    6731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:47:29.790919    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:29.802179    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:29.816853    6731 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:47:29.819743    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:47:29.827667    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:47:29.841027    6731 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:47:29.941968    6731 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:47:30.045493    6731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:47:30.045564    6731 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
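
Note: docker.go:574 pushes a small daemon.json (130 bytes here) so dockerd agrees with the kubelet on the cgroup driver. The payload itself is not printed in the log; the sketch below is a plausible reconstruction consistent with the "cgroupfs" message above, and every field value in it is an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed shape of the daemon.json minikube writes for the cgroupfs driver.
        daemon := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        b, _ := json.MarshalIndent(daemon, "", "  ")
        fmt.Println(string(b))
    }
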
	I0819 10:47:30.059349    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:30.153983    6731 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:47:32.475528    6731 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.321474833s)
	I0819 10:47:32.475593    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:47:32.486499    6731 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:47:32.499892    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:32.510342    6731 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:47:32.602953    6731 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:47:32.726572    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:32.829541    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:47:32.850769    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:32.861330    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:32.957342    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:47:33.019734    6731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:47:33.019811    6731 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:47:33.024665    6731 start.go:563] Will wait 60s for crictl version
	I0819 10:47:33.024717    6731 ssh_runner.go:195] Run: which crictl
	I0819 10:47:33.028242    6731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:47:33.053696    6731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:47:33.053765    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:33.070786    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:33.110368    6731 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:47:33.110419    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:33.110842    6731 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:47:33.115455    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
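
Note: the bash fragment above upserts `192.169.0.1<TAB>host.minikube.internal` into /etc/hosts: grep -v strips any existing line ending in a tab plus that name, the filtered file plus the new entry goes to a temp file, and sudo cp moves it back. The same logic in Go (standalone sketch; minikube actually ships the bash one-liner over SSH):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost drops any line ending in "\t<name>" and appends a fresh entry.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n"
        fmt.Print(upsertHost(hosts, "192.169.0.1", "host.minikube.internal"))
    }
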
	I0819 10:47:33.125038    6731 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:47:33.125131    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:47:33.125186    6731 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:47:33.138502    6731 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:47:33.138514    6731 docker.go:615] Images already preloaded, skipping extraction
	I0819 10:47:33.138587    6731 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:47:33.152253    6731 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:47:33.152273    6731 cache_images.go:84] Images are preloaded, skipping loading
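
Note: cache_images.go lists what the runtime already holds (`docker images --format {{.Repository}}:{{.Tag}}`, dumped twice above) and skips extracting the preload tarball when every required v1.31.0 image is present. A sketch of that containment check; the required list is abbreviated from the stdout block above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        required := []string{ // abbreviated from the log
            "registry.k8s.io/kube-apiserver:v1.31.0",
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/pause:3.10",
        }
        for _, img := range required {
            if !have[img] {
                fmt.Println("missing, would extract preload:", img)
                return
            }
        }
        fmt.Println("Images are preloaded, skipping loading")
    }
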
	I0819 10:47:33.152286    6731 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:47:33.152388    6731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:47:33.152487    6731 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:47:33.188995    6731 cni.go:84] Creating CNI manager for ""
	I0819 10:47:33.189008    6731 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:47:33.189020    6731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:47:33.189037    6731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:47:33.189121    6731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
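
Note: the kubeadm.yaml above is rendered from the option struct at kubeadm.go:181: InitConfiguration carries the node's advertise address, name and CRI socket, ClusterConfiguration carries the control-plane endpoint, cert SANs and extra args, and the kubelet/kube-proxy documents pin cgroupfs and the pod CIDR. A minimal text/template sketch of that rendering; the template body is a trimmed stand-in, not minikube's real template:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        p := struct {
            NodeIP, CRISocket, NodeName string
            Port                        int
        }{"192.169.0.5", "unix:///var/run/cri-dockerd.sock", "ha-431000", 8443}
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }
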
	
	I0819 10:47:33.189137    6731 kube-vip.go:115] generating kube-vip config ...
	I0819 10:47:33.189189    6731 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:47:33.201830    6731 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:47:33.201940    6731 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
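
Note: kube-vip.go emits the static pod above, configured entirely through env vars: address is the HA VIP 192.169.0.254, cp_enable turns on control-plane mode, and because the modprobe of the ip_vs modules at 10:47:33.189 succeeded, lb_enable/lb_port are added for control-plane load-balancing. A sketch of assembling that env list; the helper name and return shape are assumptions:

    package main

    import "fmt"

    // kubeVIPEnv mirrors the env block of the generated static pod.
    func kubeVIPEnv(vip string, port int, lbEnabled bool) map[string]string {
        env := map[string]string{
            "vip_arp":            "true",
            "port":               fmt.Sprint(port),
            "vip_interface":      "eth0",
            "cp_enable":          "true",
            "vip_leaderelection": "true",
            "address":            vip,
        }
        if lbEnabled { // set when the ip_vs modules loaded successfully
            env["lb_enable"] = "true"
            env["lb_port"] = fmt.Sprint(port)
        }
        return env
    }

    func main() {
        fmt.Println(kubeVIPEnv("192.169.0.254", 8443, true))
    }
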
	I0819 10:47:33.201997    6731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:47:33.210450    6731 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:47:33.210495    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:47:33.217871    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:47:33.231674    6731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:47:33.245013    6731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:47:33.259054    6731 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:47:33.272685    6731 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:47:33.275698    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:47:33.285047    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:33.385931    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:47:33.400131    6731 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:47:33.400143    6731 certs.go:194] generating shared ca certs ...
	I0819 10:47:33.400154    6731 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:33.400345    6731 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:47:33.400418    6731 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:47:33.400428    6731 certs.go:256] generating profile certs ...
	I0819 10:47:33.400545    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:47:33.400566    6731 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59
	I0819 10:47:33.400581    6731 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0819 10:47:33.706693    6731 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59 ...
	I0819 10:47:33.706714    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59: {Name:mk3ef913d0a2b6704747c9cac46f692f95ca83d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:33.707051    6731 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59 ...
	I0819 10:47:33.707062    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59: {Name:mk47cdc11bd849114252b3917882ba0c41ebb9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:33.707265    6731 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:47:33.707470    6731 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:47:33.707706    6731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:47:33.707719    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:47:33.707742    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:47:33.707763    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:47:33.707783    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:47:33.707800    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:47:33.707818    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:47:33.707836    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:47:33.707854    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:47:33.707965    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:47:33.708012    6731 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:47:33.708021    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:47:33.708051    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:47:33.708080    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:47:33.708108    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:47:33.708172    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:33.708203    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:47:33.708224    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:47:33.708242    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:33.708696    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:47:33.750639    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:47:33.793357    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:47:33.817739    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:47:33.839363    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 10:47:33.859538    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:47:33.879468    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:47:33.899477    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:47:33.919387    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:47:33.939367    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:47:33.959111    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:47:33.978053    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:47:33.991986    6731 ssh_runner.go:195] Run: openssl version
	I0819 10:47:33.996321    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:47:34.004824    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:47:34.008214    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:47:34.008253    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:47:34.012526    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:47:34.020744    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:47:34.029254    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:47:34.032767    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:47:34.032806    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:47:34.037138    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:47:34.045595    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:47:34.053763    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:34.057262    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:34.057304    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:34.061509    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
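
Note: the `ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0` style commands above create OpenSSL subject-hash lookup names: `openssl x509 -hash -noout` prints an 8-hex-digit hash of the cert subject, and OpenSSL resolves trust by opening `<hash>.0` in the certs directory. A sketch of building one such link, using the paths from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/2174.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "51391683"
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // mirror ln -fs: replace any existing link
        if err := os.Symlink("/etc/ssl/certs/2174.pem", link); err != nil {
            panic(err)
        }
        fmt.Println(link, "->", "/etc/ssl/certs/2174.pem")
    }
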
	I0819 10:47:34.070103    6731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:47:34.073578    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 10:47:34.078201    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 10:47:34.082612    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 10:47:34.087103    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 10:47:34.091437    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 10:47:34.095760    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
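
Note: each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is how the control-plane certs are vetted before reuse. The same check in pure Go (standalone sketch; reads a PEM file given on the command line and assumes it parses):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile(os.Args[1]) // e.g. apiserver-kubelet-client.crt
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of: openssl x509 -noout -checkend 86400
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
            os.Exit(1)
        }
        fmt.Println("Certificate will not expire")
    }
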
	I0819 10:47:34.100115    6731 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:47:34.100230    6731 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:47:34.113393    6731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:47:34.120906    6731 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 10:47:34.120917    6731 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 10:47:34.120957    6731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 10:47:34.128485    6731 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:47:34.128797    6731 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-431000" does not appear in /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:34.128883    6731 kubeconfig.go:62] /Users/jenkins/minikube-integration/19478-1622/kubeconfig needs updating (will repair): [kubeconfig missing "ha-431000" cluster setting kubeconfig missing "ha-431000" context setting]
	I0819 10:47:34.129058    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:34.129469    6731 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:34.129662    6731 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1139f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:47:34.129951    6731 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:47:34.130122    6731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 10:47:34.137350    6731 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0819 10:47:34.137364    6731 kubeadm.go:597] duration metric: took 16.443406ms to restartPrimaryControlPlane
	I0819 10:47:34.137370    6731 kubeadm.go:394] duration metric: took 37.259659ms to StartCluster
	I0819 10:47:34.137379    6731 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:34.137458    6731 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:34.137795    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:34.138049    6731 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:47:34.138062    6731 start.go:241] waiting for startup goroutines ...
	I0819 10:47:34.138093    6731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:47:34.138228    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:34.182792    6731 out.go:177] * Enabled addons: 
	I0819 10:47:34.203662    6731 addons.go:510] duration metric: took 65.572958ms for enable addons: enabled=[]
	I0819 10:47:34.203791    6731 start.go:246] waiting for cluster config update ...
	I0819 10:47:34.203803    6731 start.go:255] writing updated cluster config ...
	I0819 10:47:34.226648    6731 out.go:201] 
	I0819 10:47:34.250149    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:34.250276    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:34.272715    6731 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:47:34.314737    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:47:34.314772    6731 cache.go:56] Caching tarball of preloaded images
	I0819 10:47:34.314979    6731 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:47:34.315025    6731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:47:34.315140    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:34.316055    6731 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:47:34.316175    6731 start.go:364] duration metric: took 95.252µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:47:34.316201    6731 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:47:34.316218    6731 fix.go:54] fixHost starting: m02
	I0819 10:47:34.316649    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:34.316675    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:34.325824    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52042
	I0819 10:47:34.326364    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:34.326725    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:34.326734    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:34.326990    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:34.327207    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:34.327371    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:47:34.327556    6731 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:34.327684    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:47:34.328623    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6436 missing from process table
	I0819 10:47:34.328664    6731 fix.go:112] recreateIfNeeded on ha-431000-m02: state=Stopped err=<nil>
	I0819 10:47:34.328674    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	W0819 10:47:34.328799    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:47:34.376702    6731 out.go:177] * Restarting existing hyperkit VM for "ha-431000-m02" ...
	I0819 10:47:34.397748    6731 main.go:141] libmachine: (ha-431000-m02) Calling .Start
	I0819 10:47:34.398040    6731 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:34.398181    6731 main.go:141] libmachine: (ha-431000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:47:34.399890    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6436 missing from process table
	I0819 10:47:34.399903    6731 main.go:141] libmachine: (ha-431000-m02) DBG | pid 6436 is in state "Stopped"
	I0819 10:47:34.399920    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid...
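The stale-pid cleanup above is a three-step check: read the pid file, probe whether that process still exists, and delete the file if it does not. A minimal shell sketch of the same idea (the path and variable names are illustrative, not minikube's actual code):

	pidfile=/path/to/hyperkit.pid            # illustrative path
	if [ -f "$pidfile" ]; then
	  pid=$(cat "$pidfile")
	  if ! kill -0 "$pid" 2>/dev/null; then  # signal 0 probes existence without killing
	    rm -f "$pidfile"                     # process is gone: the pid file is stale
	  fi
	fi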
	I0819 10:47:34.400291    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:47:34.428075    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:47:34.428103    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:47:34.428232    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003af200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:34.428264    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003af200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:34.428356    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:47:34.428395    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:47:34.428414    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:47:34.429765    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Pid is 6783
	I0819 10:47:34.430472    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:47:34.430523    6731 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:34.430650    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6783
	I0819 10:47:34.432548    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:47:34.432573    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:47:34.432586    6731 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d6ab}
	I0819 10:47:34.432599    6731 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d62c}
	I0819 10:47:34.432608    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:47:34.432619    6731 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
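The MAC-to-IP lookup above works by scanning macOS's DHCP lease database. Assuming the usual /var/db/dhcpd_leases layout, where each record lists an ip_address= line before its hw_address= line, a rough awk equivalent would be:

	awk -v mac="5a:74:68:47:b9:72" '
	  /ip_address=/ { ip = substr($0, index($0, "=") + 1) }   # remember the most recent IP seen
	  /hw_address=/ && index($0, mac) { print ip; exit }      # emit it when the MAC matches
	' /var/db/dhcpd_leases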
	I0819 10:47:34.432669    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:47:34.433339    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:34.433544    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:34.434121    6731 machine.go:93] provisionDockerMachine start ...
	I0819 10:47:34.434131    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:34.434259    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:34.434360    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:34.434461    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:34.434563    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:34.434665    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:34.434786    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:34.434931    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:34.434939    6731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:47:34.437670    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:47:34.446364    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:47:34.447557    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:34.447574    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:34.447585    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:34.447595    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:34.831206    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:47:34.831223    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:47:34.946012    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:34.946044    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:34.946065    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:34.946082    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:34.946901    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:47:34.946912    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:47:40.531269    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:47:40.531330    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:47:40.531340    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:47:40.556233    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:47:45.507448    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:47:45.507462    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:47:45.507581    6731 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:47:45.507593    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:47:45.507670    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.507776    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.507909    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.507996    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.508101    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.508234    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.508381    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.508389    6731 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:47:45.583754    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:47:45.583774    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.583905    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.584002    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.584099    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.584184    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.584323    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.584482    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.584494    6731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:47:45.658171    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:47:45.658187    6731 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:47:45.658197    6731 buildroot.go:174] setting up certificates
	I0819 10:47:45.658205    6731 provision.go:84] configureAuth start
	I0819 10:47:45.658211    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:47:45.658365    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:45.658474    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.658558    6731 provision.go:143] copyHostCerts
	I0819 10:47:45.658585    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:45.658635    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:47:45.658641    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:45.658762    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:47:45.658966    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:45.658995    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:47:45.658999    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:45.659067    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:47:45.659209    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:45.659236    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:47:45.659241    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:45.659309    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:47:45.659487    6731 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
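The server certificate is generated with the SAN list shown above (loopback, the node IP, the machine name, localhost, minikube) so TLS verification succeeds no matter which of those names a client dials. A rough self-signed approximation with openssl (minikube actually signs against its own CA; -addext needs OpenSSL 1.1.1 or newer):

	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.ha-431000-m02" \
	  -addext "subjectAltName=IP:127.0.0.1,IP:192.169.0.6,DNS:ha-431000-m02,DNS:localhost,DNS:minikube"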
	I0819 10:47:45.772365    6731 provision.go:177] copyRemoteCerts
	I0819 10:47:45.772449    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:47:45.772468    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.772616    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.772719    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.772815    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.772905    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:45.813424    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:47:45.813495    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:47:45.833296    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:47:45.833365    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:47:45.853251    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:47:45.853315    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:47:45.873370    6731 provision.go:87] duration metric: took 215.153593ms to configureAuth
	I0819 10:47:45.873384    6731 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:47:45.873555    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:45.873574    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:45.873707    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.873815    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.873904    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.874006    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.874106    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.874221    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.874350    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.874357    6731 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:47:45.937816    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:47:45.937826    6731 buildroot.go:70] root file system type: tmpfs
	I0819 10:47:45.937934    6731 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:47:45.937947    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.938086    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.938186    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.938276    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.938370    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.938507    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.938641    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.938689    6731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:47:46.014680    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
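The empty ExecStart= line in the unit above is the standard systemd idiom for overriding a command: an assignment with no value clears the inherited list, and the next assignment replaces it. The same pattern in a minimal, hypothetical drop-in:

	# /etc/systemd/system/docker.service.d/override.conf (illustrative only)
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

Without the clearing line, systemd would see two ExecStart= settings and refuse to start the service, exactly as the comment embedded in the unit warns.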
	I0819 10:47:46.014697    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:46.014833    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:46.014924    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:46.015010    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:46.015092    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:46.015215    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:46.015354    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:46.015366    6731 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:47:47.693084    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:47:47.693099    6731 machine.go:96] duration metric: took 13.258686385s to provisionDockerMachine
	I0819 10:47:47.693106    6731 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:47:47.693114    6731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:47:47.693124    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:47.693322    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:47:47.693338    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:47.693428    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:47.693543    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.693661    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:47.693761    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:47.738652    6731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:47:47.742121    6731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:47:47.742133    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:47:47.742223    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:47:47.742376    6731 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:47:47.742383    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:47:47.742539    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:47:47.750138    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:47.780304    6731 start.go:296] duration metric: took 87.187547ms for postStartSetup
	I0819 10:47:47.780325    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:47.780489    6731 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:47:47.780503    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:47.780584    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:47.780680    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.780768    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:47.780844    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:47.820828    6731 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:47:47.820883    6731 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:47:47.874212    6731 fix.go:56] duration metric: took 13.557703241s for fixHost
	I0819 10:47:47.874239    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:47.874390    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:47.874493    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.874580    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.874675    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:47.874801    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:47.874942    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:47.874950    6731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:47:47.939805    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089667.971112519
	
	I0819 10:47:47.939818    6731 fix.go:216] guest clock: 1724089667.971112519
	I0819 10:47:47.939826    6731 fix.go:229] Guest: 2024-08-19 10:47:47.971112519 -0700 PDT Remote: 2024-08-19 10:47:47.874228 -0700 PDT m=+34.922052537 (delta=96.884519ms)
	I0819 10:47:47.939836    6731 fix.go:200] guest clock delta is within tolerance: 96.884519ms
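The reported delta is simply the guest reading minus the host reading; both fall within the same second, so only the fractional parts matter: 0.971112519 - 0.874228 = 0.096884519 s, i.e. the 96.884519ms shown. Quick check:

	awk 'BEGIN { printf "%.9f\n", 0.971112519 - 0.874228 }'   # prints 0.096884519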
	I0819 10:47:47.939839    6731 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.623361057s
	I0819 10:47:47.939855    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:47.939978    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:47.963353    6731 out.go:177] * Found network options:
	I0819 10:47:47.984541    6731 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:47:48.006564    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:47:48.006602    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:48.007422    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:48.007661    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:48.007799    6731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:47:48.007841    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:47:48.007857    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:47:48.007960    6731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:47:48.007982    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:48.008073    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:48.008275    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:48.008303    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:48.008450    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:48.008512    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:48.008705    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:48.008702    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:48.008832    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:47:48.046347    6731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:47:48.046407    6731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:47:48.092373    6731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:47:48.092395    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:48.092498    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:48.108693    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:47:48.117700    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:47:48.126528    6731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:47:48.126570    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:47:48.135370    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:48.144295    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:47:48.153239    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:48.162188    6731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:47:48.171097    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:47:48.180126    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:47:48.188940    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
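Taken together, the sed series above rewrites /etc/containerd/config.toml in place: the pause image, OOM score handling, the cgroupfs driver instead of systemd cgroups, the runc v2 shim, the CNI conf dir, and unprivileged ports. The touched keys end up roughly like this (an illustrative fragment, not the complete file):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false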
	I0819 10:47:48.197810    6731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:47:48.205812    6731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:47:48.213773    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:48.325175    6731 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:47:48.347923    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:48.347991    6731 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:47:48.361302    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:48.374626    6731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:47:48.389101    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:48.399756    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:48.409828    6731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:47:48.432006    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:48.442558    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:48.457632    6731 ssh_runner.go:195] Run: which cri-dockerd
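With /etc/crictl.yaml rewritten to point at cri-dockerd's socket, plain crictl invocations now hit the right runtime without repeating the endpoint each time. The equivalent one-off call, for comparison:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version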
	I0819 10:47:48.460581    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:47:48.467778    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:47:48.481436    6731 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:47:48.581769    6731 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:47:48.698298    6731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:47:48.698327    6731 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:47:48.712343    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:48.807611    6731 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:47:51.175487    6731 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367806337s)
	I0819 10:47:51.175551    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:47:51.185809    6731 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:47:51.199305    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:51.209999    6731 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:47:51.305659    6731 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:47:51.404114    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:51.515116    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:47:51.528971    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:51.540018    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:51.642211    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:47:51.708864    6731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:47:51.708942    6731 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:47:51.713456    6731 start.go:563] Will wait 60s for crictl version
	I0819 10:47:51.713510    6731 ssh_runner.go:195] Run: which crictl
	I0819 10:47:51.719286    6731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:47:51.744566    6731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:47:51.744636    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:51.762063    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:51.802673    6731 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:47:51.844258    6731 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:47:51.865266    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:51.865575    6731 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:47:51.869247    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
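That one-liner is an idempotent hosts-file update: filter out any existing host.minikube.internal record, append the fresh one, and copy the result back via sudo cp (a plain redirect would fail, since the redirect is opened by the unprivileged shell, not by sudo). Generalized as a sketch, where update_host is a hypothetical helper and not part of minikube:

	update_host() {  # usage: update_host <ip> <name>
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}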
	I0819 10:47:51.879589    6731 mustload.go:65] Loading cluster: ha-431000
	I0819 10:47:51.879763    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:51.879994    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:51.880010    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:51.889072    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52064
	I0819 10:47:51.889483    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:51.889854    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:51.889872    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:51.890119    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:51.890230    6731 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:47:51.890313    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:51.890398    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:47:51.891393    6731 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:47:51.891646    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:51.891661    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:51.900428    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52066
	I0819 10:47:51.900763    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:51.901079    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:51.901089    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:51.901317    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:51.901415    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:51.901514    6731 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:47:51.901521    6731 certs.go:194] generating shared ca certs ...
	I0819 10:47:51.901534    6731 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:51.901670    6731 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:47:51.901723    6731 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:47:51.901732    6731 certs.go:256] generating profile certs ...
	I0819 10:47:51.901831    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:47:51.901922    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.f69e9b91
	I0819 10:47:51.901978    6731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:47:51.901986    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:47:51.902006    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:47:51.902026    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:47:51.902044    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:47:51.902062    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:47:51.902080    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:47:51.902099    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:47:51.902116    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:47:51.902197    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:47:51.902236    6731 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:47:51.902244    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:47:51.902283    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:47:51.902314    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:47:51.902343    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:47:51.902410    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:51.902441    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:47:51.902461    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:47:51.902483    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:51.902508    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:51.902593    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:51.902677    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:51.902761    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:51.902837    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:51.926599    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:47:51.930274    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:47:51.938012    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:47:51.941060    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:47:51.948752    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:47:51.951705    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:47:51.959653    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:47:51.962721    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:47:51.971351    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:47:51.974362    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:47:51.982204    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:47:51.985240    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:47:51.993894    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:47:52.013902    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:47:52.033528    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:47:52.053096    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:47:52.072504    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 10:47:52.091757    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:47:52.110982    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:47:52.130616    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:47:52.150337    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:47:52.170242    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:47:52.189881    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:47:52.209131    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:47:52.222937    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:47:52.236606    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:47:52.250135    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:47:52.263801    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:47:52.277449    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:47:52.290914    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:47:52.304537    6731 ssh_runner.go:195] Run: openssl version
	I0819 10:47:52.308871    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:47:52.317959    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:47:52.321340    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:47:52.321374    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:47:52.325500    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:47:52.334569    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:47:52.343508    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:52.346908    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:52.346954    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:52.351191    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 10:47:52.360097    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:47:52.369144    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:47:52.372634    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:47:52.372668    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:47:52.377048    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
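Note: the three cycles above are a c_rehash-style CA install: copy each certificate into /usr/share/ca-certificates, compute its OpenSSL subject hash, then symlink /etc/ssl/certs/<hash>.0 to it so TLS libraries can locate the CA by hash. A rough Go equivalent, assuming openssl is on PATH and reusing one of this run's paths purely as an example:

// Sketch of the hash-and-link dance seen above, not minikube's certs.go.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirrors `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}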
	I0819 10:47:52.385997    6731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:47:52.389485    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 10:47:52.393773    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 10:47:52.398077    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 10:47:52.402284    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 10:47:52.406494    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 10:47:52.410784    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
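Note: each `openssl x509 -noout -in <crt> -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how existing control-plane certs are vetted before reuse instead of being regenerated. The same check can be done natively; a sketch, with the certificate path copied from this run for illustration:

// Go equivalent of `openssl x509 -checkend 86400`: fail if the certificate
// expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "input is not PEM")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate ok")
}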
	I0819 10:47:52.415017    6731 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:47:52.415077    6731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:47:52.415094    6731 kube-vip.go:115] generating kube-vip config ...
	I0819 10:47:52.415128    6731 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:47:52.428484    6731 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:47:52.428533    6731 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
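Note: the manifest above runs kube-vip as a static pod on each control-plane node. Leader election on the plndr-cp-lock lease decides which node answers ARP for the VIP 192.169.0.254, and lb_enable/lb_port spread API traffic across members on 8443. A trivial reachability probe of that VIP, as a hedged sketch (address and port come from the config above; everything else is illustrative):

// Minimal TCP probe of the control-plane VIP advertised by kube-vip.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP is answering on 8443")
}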
	I0819 10:47:52.428584    6731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:47:52.436426    6731 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:47:52.436471    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:47:52.443594    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:47:52.457212    6731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:47:52.470304    6731 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:47:52.484055    6731 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:47:52.486893    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
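Note: that one-liner is an idempotent /etc/hosts edit: drop any existing control-plane.minikube.internal line, append the current VIP mapping, and copy the result back into place. The same idea in Go, pointed at a scratch file so it can be run safely; this is an illustration, not minikube's code:

// Idempotent hosts-file edit mirroring the shell one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("hosts.test", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}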
	I0819 10:47:52.496372    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:52.591931    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:47:52.607116    6731 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:47:52.607291    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:52.628710    6731 out.go:177] * Verifying Kubernetes components...
	I0819 10:47:52.670346    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:52.783782    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:47:52.798292    6731 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:52.798497    6731 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1139f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:47:52.798536    6731 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:47:52.798707    6731 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:47:52.798781    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:47:52.798786    6731 round_trippers.go:469] Request Headers:
	I0819 10:47:52.798795    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:47:52.798799    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.294663    6731 round_trippers.go:574] Response Status: 200 OK in 8495 milliseconds
	I0819 10:48:01.295619    6731 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:48:01.295631    6731 node_ready.go:38] duration metric: took 8.496725269s for node "ha-431000-m02" to be "Ready" ...
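Note: the node_ready wait is a straightforward poll of GET /api/v1/nodes/<name> until the Ready condition reports True. A client-go sketch of the same loop; the kubeconfig path and node name are taken from this run but are otherwise assumptions:

// Poll a node's Ready condition the way the node_ready wait above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19478-1622/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-431000-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-431000-m02" is Ready`)
}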
	I0819 10:48:01.295639    6731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0819 10:48:01.295675    6731 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:48:01.295684    6731 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:48:01.295719    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:01.295725    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.295731    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.295738    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.330440    6731 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0819 10:48:01.337354    6731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.337421    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:48:01.337427    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.337433    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.337437    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.341316    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.341771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.341778    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.341784    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.341787    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.348506    6731 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:48:01.348939    6731 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.348948    6731 pod_ready.go:82] duration metric: took 11.576417ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.348955    6731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.349002    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:48:01.349009    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.349018    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.349023    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.352838    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.353315    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.353323    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.353329    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.353332    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.359196    6731 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 10:48:01.359534    6731 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.359544    6731 pod_ready.go:82] duration metric: took 10.583164ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.359550    6731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.359593    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:48:01.359598    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.359606    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.359612    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.362788    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.363225    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.363232    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.363240    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.363244    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.367689    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:01.368075    6731 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.368086    6731 pod_ready.go:82] duration metric: took 8.530882ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.368092    6731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.368143    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:48:01.368148    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.368154    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.368159    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.371432    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.372034    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:01.372042    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.372047    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.372051    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.374444    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.374736    6731 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.374746    6731 pod_ready.go:82] duration metric: took 6.6473ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.374762    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.374802    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:48:01.374806    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.374812    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.374816    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.377666    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.497544    6731 request.go:632] Waited for 119.461544ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.497628    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.497639    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.497644    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.497657    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.500903    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.501455    6731 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.501465    6731 pod_ready.go:82] duration metric: took 126.694729ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.501472    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.696523    6731 request.go:632] Waited for 195.000548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:48:01.696576    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:48:01.696581    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.696587    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.696591    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.699558    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.896265    6731 request.go:632] Waited for 196.197674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:01.896299    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:01.896306    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.896314    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.896318    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.898585    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.899021    6731 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.899030    6731 pod_ready.go:82] duration metric: took 397.544864ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
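Note: the recurring "Waited for ... due to client-side throttling" lines come from client-go's default rate limiter. With QPS and Burst left at zero in rest.Config (as the config dump above shows), the client falls back to its defaults of 5 requests/second with a burst of 10, so back-to-back GETs queue briefly. These are informational, not errors. A caller that needs faster polling raises the limits on its config; values below are illustrative:

// Relax client-go's client-side rate limits to avoid the throttling waits.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19478-1622/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // allow 50 requests/second sustained (default is 5)
	cfg.Burst = 100 // and bursts of up to 100 (default is 10)
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("clientset created with relaxed client-side rate limits")
}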
	I0819 10:48:01.899037    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.096355    6731 request.go:632] Waited for 197.256376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:48:02.096461    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:48:02.096473    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.096484    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.096492    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.100048    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:02.295872    6731 request.go:632] Waited for 195.092018ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:02.295923    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:02.295929    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.295935    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.295938    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.297901    6731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:48:02.298170    6731 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:02.298180    6731 pod_ready.go:82] duration metric: took 399.12914ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.298196    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.496479    6731 request.go:632] Waited for 198.200207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:48:02.496532    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:48:02.496579    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.496595    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.496601    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.500536    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:02.695959    6731 request.go:632] Waited for 194.694484ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:02.696038    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:02.696044    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.696053    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.696059    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.698693    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:02.699259    6731 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:02.699268    6731 pod_ready.go:82] duration metric: took 401.059351ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.699282    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2fn5w" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.895886    6731 request.go:632] Waited for 196.554773ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2fn5w
	I0819 10:48:02.895937    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2fn5w
	I0819 10:48:02.895943    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.895949    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.895952    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.898485    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:03.097015    6731 request.go:632] Waited for 197.927938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m04
	I0819 10:48:03.097110    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m04
	I0819 10:48:03.097121    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.097133    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.097139    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.100422    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:03.100848    6731 pod_ready.go:93] pod "kube-proxy-2fn5w" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:03.100861    6731 pod_ready.go:82] duration metric: took 401.564872ms for pod "kube-proxy-2fn5w" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.100870    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.297507    6731 request.go:632] Waited for 196.572896ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:48:03.297595    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:48:03.297605    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.297617    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.297628    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.300868    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:03.497170    6731 request.go:632] Waited for 195.491118ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:03.497222    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:03.497231    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.497243    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.497254    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.500591    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:03.501004    6731 pod_ready.go:98] node "ha-431000-m02" hosting pod "kube-proxy-5h7j2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:03.501017    6731 pod_ready.go:82] duration metric: took 400.132303ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	E0819 10:48:03.501025    6731 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-431000-m02" hosting pod "kube-proxy-5h7j2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:03.501032    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.696124    6731 request.go:632] Waited for 195.010851ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:48:03.696172    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:48:03.696179    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.696218    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.696226    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.699032    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:03.895964    6731 request.go:632] Waited for 196.576431ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:03.896021    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:03.896029    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.896037    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.896043    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.898534    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:03.898926    6731 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:03.898935    6731 pod_ready.go:82] duration metric: took 397.887553ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.898942    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:04.096184    6731 request.go:632] Waited for 197.190491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:48:04.096246    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:48:04.096256    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.096269    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.096277    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.099213    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:04.297318    6731 request.go:632] Waited for 197.526248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:04.297394    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:04.297404    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.297415    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.297424    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.301350    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:04.301819    6731 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:04.301828    6731 pod_ready.go:82] duration metric: took 402.870121ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:04.301835    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:04.495992    6731 request.go:632] Waited for 194.108051ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:48:04.496068    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:48:04.496077    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.496087    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.496094    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.499407    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:04.696474    6731 request.go:632] Waited for 196.428196ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:04.696569    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:04.696581    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.696595    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.696602    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.699405    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:04.699912    6731 pod_ready.go:98] node "ha-431000-m02" hosting pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:04.699926    6731 pod_ready.go:82] duration metric: took 398.076795ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	E0819 10:48:04.699934    6731 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-431000-m02" hosting pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:04.699945    6731 pod_ready.go:39] duration metric: took 3.404223088s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:48:04.699963    6731 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:48:04.700028    6731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:48:04.711937    6731 api_server.go:72] duration metric: took 12.104535169s to wait for apiserver process to appear ...
	I0819 10:48:04.711948    6731 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:48:04.711964    6731 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:48:04.714976    6731 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:48:04.715016    6731 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:48:04.715022    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.715028    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.715032    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.715515    6731 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:48:04.715659    6731 api_server.go:141] control plane version: v1.31.0
	I0819 10:48:04.715671    6731 api_server.go:131] duration metric: took 3.718718ms to wait for apiserver health ...
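Note: the healthz gate is a plain HTTPS GET against the apiserver that must come back 200 with body "ok". A stripped-down version, trusting the cluster CA from this run (the path is an example):

// Bare-bones /healthz probe, mirroring the check above.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	ca, err := os.ReadFile("/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca)
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool}, // trust only the cluster CA
	}}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}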
	I0819 10:48:04.715676    6731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:48:04.896062    6731 request.go:632] Waited for 180.330037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:04.896138    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:04.896149    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.896159    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.896167    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.900885    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:04.904876    6731 system_pods.go:59] 19 kube-system pods found
	I0819 10:48:04.904891    6731 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:48:04.904896    6731 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:48:04.904899    6731 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:48:04.904902    6731 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:48:04.904905    6731 system_pods.go:61] "kindnet-kcrzx" [4d8e74ea-456c-476b-951f-c880eb642788] Running
	I0819 10:48:04.904908    6731 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:48:04.904911    6731 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:48:04.904914    6731 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:48:04.904916    6731 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:48:04.904919    6731 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:48:04.904922    6731 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:48:04.904925    6731 system_pods.go:61] "kube-proxy-2fn5w" [bca1b722-fe85-4f4b-a536-8228357812a4] Running
	I0819 10:48:04.904927    6731 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:48:04.904930    6731 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:48:04.904933    6731 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:48:04.904935    6731 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:48:04.904938    6731 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:48:04.904940    6731 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:48:04.904955    6731 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:48:04.904964    6731 system_pods.go:74] duration metric: took 189.278663ms to wait for pod list to return data ...
	I0819 10:48:04.904971    6731 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:48:05.096767    6731 request.go:632] Waited for 191.735215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:48:05.096807    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:48:05.096813    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:05.096824    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:05.096848    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:05.099644    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:05.099783    6731 default_sa.go:45] found service account: "default"
	I0819 10:48:05.099793    6731 default_sa.go:55] duration metric: took 194.813501ms for default service account to be created ...
	I0819 10:48:05.099798    6731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:48:05.296235    6731 request.go:632] Waited for 196.389305ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:05.296338    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:05.296351    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:05.296362    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:05.296370    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:05.300491    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:05.304610    6731 system_pods.go:86] 19 kube-system pods found
	I0819 10:48:05.304622    6731 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:48:05.304626    6731 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:48:05.304629    6731 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:48:05.304631    6731 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:48:05.304634    6731 system_pods.go:89] "kindnet-kcrzx" [4d8e74ea-456c-476b-951f-c880eb642788] Running
	I0819 10:48:05.304636    6731 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:48:05.304639    6731 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:48:05.304641    6731 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:48:05.304644    6731 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:48:05.304646    6731 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:48:05.304652    6731 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:48:05.304655    6731 system_pods.go:89] "kube-proxy-2fn5w" [bca1b722-fe85-4f4b-a536-8228357812a4] Running
	I0819 10:48:05.304658    6731 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:48:05.304660    6731 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:48:05.304663    6731 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:48:05.304666    6731 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:48:05.304670    6731 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:48:05.304673    6731 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:48:05.304675    6731 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:48:05.304679    6731 system_pods.go:126] duration metric: took 204.873114ms to wait for k8s-apps to be running ...
	I0819 10:48:05.304689    6731 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:48:05.304743    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:48:05.315748    6731 system_svc.go:56] duration metric: took 11.056169ms for WaitForService to wait for kubelet
	I0819 10:48:05.315761    6731 kubeadm.go:582] duration metric: took 12.708349079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:48:05.315777    6731 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:48:05.496283    6731 request.go:632] Waited for 180.435074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:48:05.496409    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:48:05.496422    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:05.496434    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:05.496442    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:05.500479    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:05.501183    6731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:48:05.501199    6731 node_conditions.go:123] node cpu capacity is 2
	I0819 10:48:05.501209    6731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:48:05.501213    6731 node_conditions.go:123] node cpu capacity is 2
	I0819 10:48:05.501217    6731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:48:05.501220    6731 node_conditions.go:123] node cpu capacity is 2
	I0819 10:48:05.501224    6731 node_conditions.go:105] duration metric: took 185.438997ms to run NodePressure ...
	I0819 10:48:05.501232    6731 start.go:241] waiting for startup goroutines ...
	I0819 10:48:05.501250    6731 start.go:255] writing updated cluster config ...
	I0819 10:48:05.523466    6731 out.go:201] 
	I0819 10:48:05.560623    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:05.560698    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:48:05.598433    6731 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:48:05.673302    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:48:05.673330    6731 cache.go:56] Caching tarball of preloaded images
	I0819 10:48:05.673481    6731 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:48:05.673495    6731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:48:05.673583    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:48:05.674126    6731 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:48:05.674196    6731 start.go:364] duration metric: took 53.173µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:48:05.674214    6731 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:48:05.674220    6731 fix.go:54] fixHost starting: m03
	I0819 10:48:05.674532    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:48:05.674564    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:48:05.684031    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52071
	I0819 10:48:05.684387    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:48:05.684730    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:48:05.684748    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:48:05.684970    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:48:05.685096    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:05.685184    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:48:05.685314    6731 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:05.685417    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:48:05.686356    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid 4921 missing from process table
	I0819 10:48:05.686393    6731 fix.go:112] recreateIfNeeded on ha-431000-m03: state=Stopped err=<nil>
	I0819 10:48:05.686403    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	W0819 10:48:05.686488    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:48:05.707556    6731 out.go:177] * Restarting existing hyperkit VM for "ha-431000-m03" ...
	I0819 10:48:05.749205    6731 main.go:141] libmachine: (ha-431000-m03) Calling .Start
	I0819 10:48:05.749457    6731 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:05.749508    6731 main.go:141] libmachine: (ha-431000-m03) minikube might have been shut down in an unclean way; the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:48:05.750891    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid 4921 missing from process table
	I0819 10:48:05.750907    6731 main.go:141] libmachine: (ha-431000-m03) DBG | pid 4921 is in state "Stopped"
	I0819 10:48:05.750937    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid...
	I0819 10:48:05.751980    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:48:05.783917    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:48:05.783944    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:48:05.784089    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00039adb0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:48:05.784126    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00039adb0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:48:05.784162    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:48:05.784200    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:48:05.784218    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:48:05.786149    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Pid is 6801
	I0819 10:48:05.786682    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:48:05.786725    6731 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:05.786782    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 6801
	I0819 10:48:05.789082    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:48:05.789187    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:48:05.789247    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d6bf}
	I0819 10:48:05.789282    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d6ab}
	I0819 10:48:05.789327    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:48:05.789331    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 10:48:05.789394    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:48:05.789432    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:48:05.789457    6731 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
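Note: hyperkit guests lease their IPs from macOS's vmnet DHCP server, so the driver resolves a VM's address by scanning /var/db/dhcpd_leases for the MAC it generated, exactly as the DBG lines above show. A sketch of that lookup; the key=value field names follow the lease file's layout:

// Find a hyperkit guest's IP by MAC in /var/db/dhcpd_leases.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func leaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v // remember this entry's address...
		}
		if v, ok := strings.CutPrefix(line, "hw_address="); ok {
			// ...and return it when the MAC matches (format "1,f6:29:ff:...").
			if strings.HasSuffix(v, mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := leaseIP("/var/db/dhcpd_leases", "f6:29:ff:43:e4:63")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // expected for this run: 192.169.0.7
}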
	I0819 10:48:05.790573    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:05.790831    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:48:05.791509    6731 machine.go:93] provisionDockerMachine start ...
	I0819 10:48:05.791526    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:05.791708    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:05.791856    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:05.791989    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:05.792106    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:05.792233    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:05.792391    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:05.792718    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:05.792736    6731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:48:05.795522    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:48:05.805645    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:48:05.807213    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:48:05.807239    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:48:05.807263    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:48:05.807280    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:48:06.196775    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:48:06.196792    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:48:06.311674    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:48:06.311699    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:48:06.311708    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:48:06.311716    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:48:06.312485    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:48:06.312497    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:48:11.891105    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:48:11.891118    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:48:11.891126    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:48:11.914412    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:48:40.850746    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:48:40.850774    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:48:40.850923    6731 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:48:40.850935    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:48:40.851109    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:40.851215    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:40.851319    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.851447    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.851565    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:40.851724    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:40.851884    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:40.851892    6731 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:48:40.912350    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:48:40.912364    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:40.912505    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:40.912602    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.912691    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.912785    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:40.912908    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:40.913053    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:40.913064    6731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:48:40.968529    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
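
The two SSH commands above provision the node name in two steps: write /etc/hostname, then patch the 127.0.1.1 alias in /etc/hosts so the node resolves its own name even without DNS. A sketch that assembles the same remote script, with runSSH standing in for minikube's SSH runner (hypothetical signature):

    package main

    import "fmt"

    // setHostname mirrors the two remote commands above: write /etc/hostname,
    // then make sure /etc/hosts carries a 127.0.1.1 alias for the new name.
    func setHostname(runSSH func(cmd string) error, name string) error {
        if err := runSSH(fmt.Sprintf(
            `sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name)); err != nil {
            return err
        }
        return runSSH(fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name))
    }

    func main() {
        _ = setHostname(func(cmd string) error { fmt.Println(cmd); return nil }, "ha-431000-m03")
    }
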
	I0819 10:48:40.968544    6731 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:48:40.968564    6731 buildroot.go:174] setting up certificates
	I0819 10:48:40.968572    6731 provision.go:84] configureAuth start
	I0819 10:48:40.968583    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:48:40.968727    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:40.968824    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:40.968927    6731 provision.go:143] copyHostCerts
	I0819 10:48:40.968955    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:48:40.969005    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:48:40.969014    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:48:40.969148    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:48:40.969352    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:48:40.969382    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:48:40.969386    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:48:40.969454    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:48:40.969597    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:48:40.969626    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:48:40.969631    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:48:40.969728    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:48:40.969875    6731 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
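
configureAuth refreshes the host-side CA material and then mints a server certificate whose SAN list is exactly what the log prints: 127.0.0.1, the node IP 192.169.0.7, the hostname, localhost and minikube. A condensed sketch of that issuance with Go's crypto/x509; the file paths, key format (PKCS#1 RSA) and validity period are assumptions, while the SANs and org come from the line above:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustPEM(path string) *pem.Block {
        b, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        blk, _ := pem.Decode(b)
        if blk == nil {
            log.Fatalf("no PEM data in %s", path)
        }
        return blk
    }

    func main() {
        // CA pair as copied by copyHostCerts (paths illustrative).
        caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes) // assumes an RSA PKCS#1 key
        if err != nil {
            log.Fatal(err)
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m03"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(10, 0, 0), // validity is an assumption
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list copied from the provision line above:
            DNSNames:    []string{"ha-431000-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // becomes server.pem
    }
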
	I0819 10:48:41.057829    6731 provision.go:177] copyRemoteCerts
	I0819 10:48:41.057874    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:48:41.057888    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.058026    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.058130    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.058224    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.058305    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:41.091148    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:48:41.091220    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:48:41.111177    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:48:41.111249    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:48:41.131169    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:48:41.131232    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:48:41.150507    6731 provision.go:87] duration metric: took 181.923979ms to configureAuth
	I0819 10:48:41.150522    6731 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:48:41.150698    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:41.150712    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:41.150863    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.150946    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.151038    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.151126    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.151222    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.151342    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:41.151471    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:41.151478    6731 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:48:41.202400    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:48:41.202413    6731 buildroot.go:70] root file system type: tmpfs
	I0819 10:48:41.202505    6731 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:48:41.202518    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.202705    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.202819    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.202905    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.202997    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.203153    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:41.203294    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:41.203341    6731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:48:41.264039    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:48:41.264057    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.264193    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.264267    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.264354    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.264447    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.264565    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:41.264712    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:41.264724    6731 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:48:42.813749    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:48:42.813763    6731 machine.go:96] duration metric: took 37.021449642s to provisionDockerMachine
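
Because `df --output=fstype /` reported tmpfs earlier, the docker unit does not survive a reboot and is rendered fresh on each start; the empty `ExecStart=` line clears the command inherited from the base unit, and the diff-guard above ensures docker is only restarted when the rendered unit actually changed (here the installed unit was missing entirely, hence the mv and the new symlink). A sketch of that compare-and-swap step, reusing the hypothetical runSSH helper:

    package main

    import "fmt"

    // syncDockerUnit mirrors the shell one-liner above: only when the freshly
    // rendered docker.service.new differs from the installed unit (or the unit
    // is missing, as on a fresh tmpfs root) is it moved into place and docker
    // reloaded, re-enabled, and restarted.
    func syncDockerUnit(runSSH func(cmd string) error) error {
        return runSSH("sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new" +
            " || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service;" +
            " sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }")
    }

    func main() {
        _ = syncDockerUnit(func(cmd string) error { fmt.Println(cmd); return nil })
    }
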
	I0819 10:48:42.813771    6731 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:48:42.813778    6731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:48:42.813796    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:42.813978    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:48:42.813990    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:42.814079    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:42.814168    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.814251    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:42.814339    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:42.847285    6731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:48:42.850702    6731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:48:42.850716    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:48:42.850802    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:48:42.850961    6731 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:48:42.850968    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:48:42.851143    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:48:42.859533    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:48:42.879757    6731 start.go:296] duration metric: took 65.975651ms for postStartSetup
	I0819 10:48:42.879780    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:42.879958    6731 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:48:42.879970    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:42.880059    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:42.880147    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.880225    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:42.880299    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:42.912892    6731 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:48:42.912952    6731 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:48:42.966028    6731 fix.go:56] duration metric: took 37.291003007s for fixHost
	I0819 10:48:42.966067    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:42.966300    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:42.966470    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.966677    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.966842    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:42.967014    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:42.967198    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:42.967209    6731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:48:43.017214    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089722.809914885
	
	I0819 10:48:43.017227    6731 fix.go:216] guest clock: 1724089722.809914885
	I0819 10:48:43.017238    6731 fix.go:229] Guest: 2024-08-19 10:48:42.809914885 -0700 PDT Remote: 2024-08-19 10:48:42.966051 -0700 PDT m=+90.012694037 (delta=-156.136115ms)
	I0819 10:48:43.017249    6731 fix.go:200] guest clock delta is within tolerance: -156.136115ms
	I0819 10:48:43.017253    6731 start.go:83] releasing machines lock for "ha-431000-m03", held for 37.342247723s
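
The fix step ends with a clock comparison: the guest's `date +%s.%N` output is parsed and the delta against the host clock (-156.136ms here) is checked against a tolerance before minikube would bother resyncing time. A sketch of that check; the 2s tolerance is an assumption (the log does not state it), the timestamps are the ones logged:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // withinTolerance parses the guest's `date +%s.%N` output and compares it
    // to the host clock. The tolerance value is illustrative only.
    func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(host)
        return delta, delta.Abs() <= tol, nil
    }

    func main() {
        host := time.Unix(0, 1724089722966051000) // host-side timestamp from the log
        delta, ok, _ := withinTolerance("1724089722.809914885", host, 2*time.Second)
        fmt.Printf("delta=%v within=%v\n", delta, ok) // ~ -156ms, within tolerance
    }
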
	I0819 10:48:43.017267    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.017412    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:43.053981    6731 out.go:177] * Found network options:
	I0819 10:48:43.129066    6731 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:48:43.183072    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:48:43.183105    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:48:43.183124    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.183855    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.184015    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.184100    6731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:48:43.184137    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:48:43.184239    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:48:43.184256    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:48:43.184293    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:43.184321    6731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:48:43.184333    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:43.184497    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:43.184513    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:43.184663    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:43.184689    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:43.184810    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:43.184822    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:43.184959    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:48:43.213583    6731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:48:43.213642    6731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:48:43.260969    6731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:48:43.260991    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:48:43.261093    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:48:43.276683    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:48:43.284995    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:48:43.293374    6731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:48:43.293418    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:48:43.301652    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:48:43.309897    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:48:43.318705    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:48:43.326972    6731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:48:43.335390    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:48:43.343887    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:48:43.352357    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:48:43.360984    6731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:48:43.368494    6731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:48:43.376120    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:43.467265    6731 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:48:43.484775    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:48:43.484846    6731 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:48:43.497091    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:48:43.508193    6731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:48:43.523755    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:48:43.534687    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:48:43.544926    6731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:48:43.565401    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:48:43.578088    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:48:43.593104    6731 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:48:43.595950    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:48:43.603348    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:48:43.617225    6731 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:48:43.708564    6731 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:48:43.826974    6731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:48:43.827000    6731 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:48:43.840921    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:43.931831    6731 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:48:46.156257    6731 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.224358944s)
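
Before the 2.2s docker restart above, minikube scp's a small daemon.json (130 bytes) to force the cgroupfs driver. The payload itself is not echoed in the log; the sketch below is a plausible reconstruction, and only the native.cgroupdriver=cgroupfs setting is actually confirmed by the "configuring docker to use \"cgroupfs\"" line:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Reconstructed daemon.json; everything except the cgroup driver
        // setting is an assumption.
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b)) // would be scp'd to /etc/docker/daemon.json
    }
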
	I0819 10:48:46.156321    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:48:46.167537    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:48:46.177508    6731 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:48:46.275371    6731 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:48:46.384348    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:46.481007    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:48:46.494577    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:48:46.505747    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:46.597531    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:48:46.653351    6731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:48:46.653427    6731 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:48:46.657670    6731 start.go:563] Will wait 60s for crictl version
	I0819 10:48:46.657717    6731 ssh_runner.go:195] Run: which crictl
	I0819 10:48:46.660938    6731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:48:46.686761    6731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
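
Both waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") follow the same deadline-poll shape. A generic sketch of that pattern, polling for the cri-dockerd socket; the 500ms interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitFor polls check every interval until it succeeds or the deadline
    // passes -- the same shape as the two 60s waits in the log.
    func waitFor(timeout, interval time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        var err error
        for time.Now().Before(deadline) {
            if err = check(); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("timed out after %v: %w", timeout, err)
    }

    func main() {
        err := waitFor(60*time.Second, 500*time.Millisecond, func() error {
            _, err := os.Stat("/var/run/cri-dockerd.sock")
            return err
        })
        fmt.Println(err)
    }
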
	I0819 10:48:46.686832    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:48:46.704526    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:48:46.743134    6731 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:48:46.784818    6731 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:48:46.805951    6731 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0819 10:48:46.827168    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:46.827576    6731 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:48:46.832299    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:48:46.842314    6731 mustload.go:65] Loading cluster: ha-431000
	I0819 10:48:46.842487    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:46.842703    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:48:46.842725    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:48:46.851523    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52093
	I0819 10:48:46.851853    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:48:46.852189    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:48:46.852199    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:48:46.852392    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:48:46.852498    6731 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:48:46.852572    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:46.852653    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:48:46.853627    6731 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:48:46.853864    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:48:46.853886    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:48:46.862538    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52095
	I0819 10:48:46.862891    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:48:46.863218    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:48:46.863228    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:48:46.863493    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:48:46.863609    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:48:46.863718    6731 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.7
	I0819 10:48:46.863725    6731 certs.go:194] generating shared ca certs ...
	I0819 10:48:46.863739    6731 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:46.863891    6731 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:48:46.863952    6731 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:48:46.863961    6731 certs.go:256] generating profile certs ...
	I0819 10:48:46.864059    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:48:46.864084    6731 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc
	I0819 10:48:46.864099    6731 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0819 10:48:47.115702    6731 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc ...
	I0819 10:48:47.115728    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc: {Name:mk546bf47d8f9536a5f5b6d4554be985cbd51530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:47.116053    6731 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc ...
	I0819 10:48:47.116065    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc: {Name:mk7e6a2c85fe835844cf7f3435ab2787264953bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:47.116272    6731 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:48:47.116477    6731 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:48:47.116689    6731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:48:47.116699    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:48:47.116720    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:48:47.116739    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:48:47.116757    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:48:47.116776    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:48:47.116795    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:48:47.116812    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:48:47.116829    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:48:47.116905    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:48:47.116938    6731 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:48:47.116947    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:48:47.116979    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:48:47.117007    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:48:47.117035    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:48:47.117102    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:48:47.117135    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.117157    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.117176    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.117208    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:48:47.117346    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:48:47.117436    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:48:47.117536    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:48:47.117615    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:48:47.142966    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:48:47.147073    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:48:47.155318    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:48:47.158461    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:48:47.166659    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:48:47.169909    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:48:47.178109    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:48:47.181265    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:48:47.189483    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:48:47.192613    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:48:47.201555    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:48:47.205119    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:48:47.213152    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:48:47.233357    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:48:47.253373    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:48:47.273621    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:48:47.293620    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 10:48:47.313508    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:48:47.333626    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:48:47.353462    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:48:47.373370    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:48:47.393215    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:48:47.412732    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:48:47.432601    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:48:47.446319    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:48:47.460225    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:48:47.473780    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:48:47.487357    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:48:47.501097    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:48:47.514700    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:48:47.528522    6731 ssh_runner.go:195] Run: openssl version
	I0819 10:48:47.532949    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:48:47.541688    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.545076    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.545117    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.549433    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 10:48:47.558033    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:48:47.566686    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.570522    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.570574    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.574909    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:48:47.583535    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:48:47.592184    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.595867    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.595904    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.600346    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:48:47.609333    6731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:48:47.612588    6731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:48:47.612626    6731 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0819 10:48:47.612672    6731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:48:47.612693    6731 kube-vip.go:115] generating kube-vip config ...
	I0819 10:48:47.612723    6731 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:48:47.627870    6731 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:48:47.627924    6731 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
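
The generated manifest runs kube-vip as a static pod on each control-plane node: ARP mode (vip_arp), leader election through the plndr-cp-lock lease (5s duration, 3s renew deadline, 1s retry), and API-server load-balancing on port 8443 behind the floating address 192.169.0.254. Only the current lease holder answers ARP for the VIP, so a probe of the VIP always lands on a live control plane. An illustrative reachability probe (not part of minikube; it skips certificate verification and checks nothing beyond the TLS handshake):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the kube-vip virtual IP from the manifest; whichever node
        // currently holds the plndr-cp-lock lease will answer.
        conn, err := tls.DialWithDialer(
            &net.Dialer{Timeout: 3 * time.Second},
            "tcp", "192.169.0.254:8443",
            &tls.Config{InsecureSkipVerify: true}, // reachability only, no verification
        )
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("VIP served by:", conn.ConnectionState().PeerCertificates[0].Subject.CommonName)
    }
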
	I0819 10:48:47.627976    6731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:48:47.636973    6731 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:48:47.637024    6731 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:48:47.646020    6731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 10:48:47.646020    6731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 10:48:47.646020    6731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 10:48:47.646038    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:48:47.646059    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:48:47.646062    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:48:47.646121    6731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:48:47.646172    6731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:48:47.660116    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:48:47.660157    6731 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:48:47.660183    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:48:47.660208    6731 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:48:47.660226    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:48:47.660248    6731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:48:47.673769    6731 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:48:47.673805    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
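
With /var/lib/minikube/binaries/v1.31.0 missing, kubelet, kubectl and kubeadm are fetched from dl.k8s.io using the checksum=file:...sha256 scheme shown above, then scp'd into place (56MB, 58MB and 77MB respectively). A sketch of a download verified against such a detached .sha256 file:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "strings"
    )

    // fetchVerified downloads url and checks it against the hex digest served
    // at url+".sha256", mirroring the checksum=file:... scheme in the log.
    func fetchVerified(url string) ([]byte, error) {
        body, err := get(url)
        if err != nil {
            return nil, err
        }
        sumFile, err := get(url + ".sha256")
        if err != nil {
            return nil, err
        }
        want := strings.Fields(string(sumFile))[0]
        got := sha256.Sum256(body)
        if hex.EncodeToString(got[:]) != want {
            return nil, fmt.Errorf("checksum mismatch for %s", url)
        }
        return body, nil
    }

    func get(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        b, err := fetchVerified("https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("downloaded %d bytes\n", len(b)) // log shows 56381592 for kubectl
    }
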
	I0819 10:48:48.141691    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:48:48.149459    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:48:48.162963    6731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:48:48.176379    6731 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:48:48.189896    6731 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:48:48.192847    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:48:48.202768    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:48.297576    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:48:48.315324    6731 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:48:48.315508    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:48.336018    6731 out.go:177] * Verifying Kubernetes components...
	I0819 10:48:48.356514    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:48.452232    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:48:49.049566    6731 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:48:49.049773    6731 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1139f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:48:49.049811    6731 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:48:49.049986    6731 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m03" to be "Ready" ...
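
The long run of 404 responses that follows is this wait loop at work: the ha-431000-m03 node object has not been registered with the API server yet, so each GET to /api/v1/nodes/ha-431000-m03 comes back Not Found and the loop retries on a roughly half-second cadence until the 6m0s budget is spent. A self-contained sketch of that control flow, with plain net/http standing in for the client-go transport the real code uses and with client-certificate setup omitted, so it demonstrates the cadence only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeRegistered polls GET /api/v1/nodes/<name> until the API
// server stops answering 404 or the budget runs out, matching the
// ~0.5s cadence visible in the log.
func waitNodeRegistered(apiServer, node string, timeout time.Duration) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	url := fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode != http.StatusNotFound {
				return nil // node object exists; a Ready-condition check would follow
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not registered within %s", node, timeout)
}

func main() {
	// Short budget for the demo; the test above allows 6m0s.
	fmt.Println(waitNodeRegistered("https://192.169.0.5:8443", "ha-431000-m03", 3*time.Second))
}
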
	I0819 10:48:49.050026    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:49.050031    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:49.050044    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:49.050049    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:49.052182    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:49.550380    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:49.550401    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:49.550412    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:49.550420    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:49.553469    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:50.050836    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:50.050856    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:50.050867    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:50.050872    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:50.053828    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:50.551275    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:50.551290    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:50.551297    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:50.551299    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:50.553247    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:48:51.051126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:51.051149    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:51.051161    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:51.051169    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:51.054487    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:51.054565    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:51.550751    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:51.550764    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:51.550770    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:51.550773    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:51.554094    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:52.051808    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:52.051848    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:52.051857    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:52.051864    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:52.054405    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:52.551111    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:52.551135    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:52.551147    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:52.551153    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:52.554177    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:53.050562    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:53.050577    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:53.050584    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:53.050587    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:53.052361    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:48:53.550771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:53.550787    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:53.550794    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:53.550798    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:53.553283    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:53.553380    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:54.051356    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:54.051428    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:54.051441    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:54.051447    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:54.054348    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:54.551004    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:54.551020    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:54.551026    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:54.551030    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:54.553045    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:55.051095    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:55.051142    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:55.051152    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:55.051157    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:55.053428    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:55.550441    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:55.550460    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:55.550470    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:55.550475    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:55.553606    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:55.553707    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:56.050952    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:56.050966    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:56.050973    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:56.050976    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:56.052832    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:48:56.551392    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:56.551413    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:56.551441    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:56.551446    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:56.553734    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:57.051356    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:57.051377    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:57.051388    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:57.051396    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:57.054556    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:57.551010    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:57.551030    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:57.551041    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:57.551047    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:57.553839    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:57.553945    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:58.050877    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:58.050892    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:58.050900    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:58.050903    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:58.053207    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:58.551669    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:58.551688    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:58.551699    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:58.551707    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:58.554730    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:59.050796    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:59.050819    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:59.050830    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:59.050835    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:59.054088    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:59.550718    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:59.550737    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:59.550749    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:59.550756    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:59.553970    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:59.554048    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:00.052097    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:00.052120    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:00.052167    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:00.052198    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:00.055063    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:00.550744    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:00.550766    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:00.550776    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:00.550782    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:00.553834    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:01.051854    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:01.051873    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:01.051885    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:01.051892    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:01.055031    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:01.551302    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:01.551323    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:01.551335    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:01.551343    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:01.554596    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:01.554668    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:02.050920    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:02.050940    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:02.050958    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:02.050975    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:02.053736    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:02.552196    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:02.552230    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:02.552237    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:02.552240    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:02.554641    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:03.050838    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:03.050857    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:03.050868    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:03.050873    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:03.054125    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:03.550771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:03.550785    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:03.550794    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:03.550798    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:03.552910    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:04.052575    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:04.052595    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:04.052607    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:04.052621    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:04.055636    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:04.055705    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:04.552223    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:04.552242    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:04.552253    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:04.552259    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:04.555524    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:05.052550    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:05.052574    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:05.052588    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:05.052610    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:05.054909    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:05.552550    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:05.552568    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:05.552577    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:05.552581    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:05.556192    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:06.051290    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:06.051305    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:06.051311    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:06.051315    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:06.052929    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:06.550946    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:06.550969    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:06.550981    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:06.550989    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:06.565463    6731 round_trippers.go:574] Response Status: 404 Not Found in 14 milliseconds
	I0819 10:49:06.565539    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:07.051724    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:07.051792    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:07.051806    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:07.051822    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:07.054638    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:07.552559    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:07.552575    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:07.552583    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:07.552587    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:07.558906    6731 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0819 10:49:08.051983    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:08.052011    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:08.052048    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:08.052057    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:08.055151    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:08.550667    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:08.550693    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:08.550735    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:08.550750    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:08.553804    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:09.052706    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:09.052731    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:09.052776    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:09.052784    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:09.055712    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:09.055781    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:09.551599    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:09.551615    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:09.551624    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:09.551630    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:09.555183    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:10.050631    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:10.050657    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:10.050669    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:10.050674    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:10.054985    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:49:10.551126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:10.551137    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:10.551143    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:10.551146    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:10.553249    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:11.052626    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:11.052644    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:11.052651    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:11.052656    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:11.055384    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:11.550711    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:11.550725    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:11.550729    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:11.550733    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:11.554398    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:11.554509    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:12.051859    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:12.051884    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:12.051924    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:12.051934    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:12.055082    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:12.551161    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:12.551173    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:12.551179    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:12.551183    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:12.553279    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:13.051549    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:13.051610    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:13.051621    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:13.051628    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:13.054867    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:13.551864    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:13.551878    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:13.551884    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:13.551889    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:13.555066    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:13.555140    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:14.052199    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:14.052217    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:14.052223    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:14.052226    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:14.054562    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:14.551764    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:14.551790    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:14.551801    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:14.551807    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:14.555310    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:15.052223    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:15.052279    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:15.052293    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:15.052299    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:15.055796    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:15.550718    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:15.550733    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:15.550759    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:15.550766    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:15.554217    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:16.052643    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:16.052670    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:16.052716    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:16.052724    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:16.056008    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:16.056083    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:16.551933    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:16.551956    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:16.551968    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:16.551974    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:16.555280    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:17.051987    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:17.052008    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:17.052018    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:17.052025    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:17.055318    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:17.551734    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:17.551746    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:17.551751    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:17.551754    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:17.553654    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:18.050867    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:18.050886    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:18.050899    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:18.050904    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:18.053425    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:18.551523    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:18.551543    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:18.551551    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:18.551557    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:18.554279    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:18.554345    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:19.051204    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:19.051234    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:19.051246    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:19.051252    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:19.054668    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:19.552430    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:19.552449    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:19.552455    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:19.552460    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:19.554479    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:20.050892    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:20.050918    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:20.050930    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:20.050943    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:20.054172    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:20.552143    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:20.552182    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:20.552192    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:20.552198    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:20.554611    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:20.554681    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:21.051321    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:21.051347    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:21.051390    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:21.051401    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:21.054431    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:21.552828    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:21.552891    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:21.552901    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:21.552906    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:21.555366    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:22.051105    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:22.051128    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:22.051140    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:22.051146    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:22.054457    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:22.551053    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:22.551070    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:22.551078    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:22.551081    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:22.553091    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:23.051049    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:23.051073    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:23.051085    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:23.051092    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:23.054116    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:23.054269    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:23.551400    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:23.551419    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:23.551427    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:23.551429    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:23.556948    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:49:24.051531    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:24.051549    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:24.051561    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:24.051569    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:24.054942    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:24.551524    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:24.551548    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:24.551559    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:24.551565    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:24.554301    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:25.050993    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:25.051013    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:25.051022    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:25.051026    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:25.053462    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:25.551254    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:25.551269    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:25.551277    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:25.551283    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:25.553516    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:25.553584    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:26.051047    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:26.051070    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:26.051081    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:26.051095    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:26.053722    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:26.552294    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:26.552315    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:26.552326    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:26.552333    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:26.555323    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:27.051500    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:27.051522    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:27.051570    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:27.051580    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:27.054761    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:27.552023    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:27.552067    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:27.552074    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:27.552076    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:27.554045    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:27.554105    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:28.051012    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:28.051068    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:28.051080    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:28.051091    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:28.053095    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:28.553091    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:28.553112    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:28.553123    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:28.553130    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:28.556091    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:29.051557    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:29.051582    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:29.051593    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:29.051606    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:29.055042    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:29.551292    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:29.551307    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:29.551313    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:29.551315    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:29.553314    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:30.051884    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:30.051917    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:30.051955    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:30.051962    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:30.055200    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:30.055279    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:30.551827    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:30.551854    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:30.551865    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:30.551873    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:30.555019    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:31.051813    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:31.051841    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:31.051852    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:31.051859    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:31.054944    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:31.551163    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:31.551184    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:31.551194    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:31.551200    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:31.553888    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:32.051783    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:32.051819    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:32.051832    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:32.051840    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:32.054547    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:32.552296    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:32.552350    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:32.552364    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:32.552371    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:32.555225    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:32.555300    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:33.052924    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:33.052939    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:33.052947    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:33.052952    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:33.054987    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:33.551522    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:33.551541    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:33.551549    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:33.551553    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:33.554655    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:34.052385    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:34.052434    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:34.052446    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:34.052454    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:34.055087    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:34.551264    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:34.551281    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:34.551289    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:34.551294    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:34.553737    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:35.051346    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:35.051367    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:35.051378    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:35.051386    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:35.054339    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:35.054443    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:35.552208    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:35.552226    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:35.552233    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:35.552237    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:35.554511    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:36.051189    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:36.051204    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:36.051212    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:36.051216    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:36.053190    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:36.553334    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:36.553356    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:36.553368    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:36.553374    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:36.556524    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:37.052539    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:37.052561    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:37.052573    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:37.052580    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:37.055836    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:37.055914    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:37.553023    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:37.553043    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:37.553053    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:37.553059    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:37.556810    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:38.051735    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:38.051757    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:38.051774    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:38.051782    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:38.055061    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:38.552449    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:38.552476    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:38.552487    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:38.552492    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:38.555685    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:39.051387    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:39.051409    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:39.051420    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:39.051425    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:39.054522    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:39.552260    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:39.552285    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:39.552298    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:39.552304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:39.555403    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:39.555495    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:40.051243    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:40.051310    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:40.051324    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:40.051331    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:40.054070    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:40.551873    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:40.551898    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:40.551960    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:40.551969    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:40.554968    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:41.051578    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:41.051606    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:41.051618    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:41.051623    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:41.054807    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:41.551916    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:41.551931    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:41.551943    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:41.551947    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:41.554367    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:42.053217    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:42.053241    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:42.053249    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:42.053255    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:42.056808    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:42.056893    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:42.552774    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:42.552803    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:42.552822    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:42.552882    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:42.556248    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:43.051301    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:43.051316    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:43.051322    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:43.051328    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:43.054036    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:43.553401    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:43.553423    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:43.553434    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:43.553471    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:43.557035    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:44.053457    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:44.053478    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:44.053489    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:44.053496    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:44.056841    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:44.551566    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:44.551590    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:44.551603    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:44.551609    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:44.555416    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:44.555493    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:45.051853    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:45.051879    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:45.051888    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:45.051895    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:45.055040    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:45.553444    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:45.553468    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:45.553515    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:45.553526    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:45.556794    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:46.051786    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:46.051806    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:46.051814    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:46.051832    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:46.053901    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:46.552785    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:46.552817    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:46.552830    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:46.552836    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:46.556083    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:46.556162    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:47.053456    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:47.053482    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:47.053494    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:47.053502    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:47.057009    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:47.553130    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:47.553152    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:47.553164    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:47.553174    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:47.559073    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:49:48.053108    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:48.053134    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:48.053145    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:48.053152    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:48.057067    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:48.552706    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:48.552729    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:48.552739    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:48.552747    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:48.556474    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:48.556559    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:49.051602    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:49.051625    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:49.051637    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:49.051646    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:49.054881    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:49.552627    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:49.552655    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:49.552667    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:49.552674    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:49.556037    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:50.052601    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:50.052618    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:50.052626    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:50.052631    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:50.055469    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:50.552155    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:50.552178    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:50.552190    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:50.552195    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:50.555596    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:51.052878    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:51.052905    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:51.052917    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:51.052922    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:51.056451    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:51.056532    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:51.552110    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:51.552139    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:51.552185    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:51.552195    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:51.555342    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:52.051920    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:52.051944    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:52.051961    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:52.051973    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:52.055723    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:52.551716    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:52.551743    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:52.551753    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:52.551790    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:52.554933    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:53.051908    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:53.051920    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:53.051926    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:53.051930    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:53.053756    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:53.552282    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:53.552329    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:53.552340    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:53.552346    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:53.554573    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:53.554662    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:54.052641    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:54.052700    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:54.052714    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:54.052724    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:54.055914    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:54.553424    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:54.553444    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:54.553453    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:54.553461    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:54.556331    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:55.052118    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:55.052139    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:55.052150    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:55.052156    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:55.055406    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:55.552115    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:55.552140    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:55.552153    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:55.552159    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:55.555054    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:55.555134    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:56.053229    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:56.053253    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:56.053266    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:56.053274    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:56.056807    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:56.552807    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:56.552829    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:56.552841    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:56.552851    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:56.556291    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:57.052874    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:57.052896    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:57.052908    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:57.052913    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:57.056108    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:57.553670    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:57.553697    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:57.553745    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:57.553758    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:57.557263    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:57.557331    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:58.051791    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:58.051817    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:58.051828    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:58.051833    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:58.055250    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:58.552518    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:58.552545    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:58.552556    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:58.552562    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:58.555625    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:59.053863    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:59.053885    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:59.053905    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:59.053914    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:59.057121    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:59.553259    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:59.553272    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:59.553278    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:59.553280    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:59.555213    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:00.052041    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:00.052090    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:00.052103    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:00.052110    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:00.054860    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:00.054945    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:00.552587    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:00.552608    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:00.552620    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:00.552626    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:00.555838    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:01.052694    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:01.052721    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:01.052732    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:01.052746    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:01.056070    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:01.553816    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:01.553839    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:01.553855    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:01.553865    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:01.557015    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:02.051783    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:02.051804    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:02.051815    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:02.051821    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:02.055085    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:02.055158    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:02.553062    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:02.553085    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:02.553097    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:02.553105    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:02.556329    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:03.052789    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:03.052811    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:03.052822    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:03.052827    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:03.055899    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:03.553258    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:03.553318    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:03.553331    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:03.553342    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:03.556755    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:04.052379    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:04.052401    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:04.052413    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:04.052420    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:04.056086    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:04.056163    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:04.552058    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:04.552079    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:04.552090    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:04.552097    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:04.554885    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:05.052906    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:05.052929    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:05.052942    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:05.052950    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:05.056201    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:05.551940    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:05.551961    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:05.551987    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:05.552004    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:05.554036    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:06.052760    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:06.052792    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:06.052801    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:06.052805    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:06.055319    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:06.551983    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:06.552008    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:06.552043    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:06.552063    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:06.554797    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:06.554875    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:07.052461    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:07.052481    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:07.052493    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:07.052501    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:07.055206    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:07.553476    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:07.553503    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:07.553555    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:07.553574    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:07.556741    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:08.052214    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:08.052241    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:08.052252    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:08.052258    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:08.055720    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:08.552079    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:08.552098    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:08.552110    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:08.552119    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:08.554790    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:09.054011    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:09.054033    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:09.054043    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:09.054051    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:09.057425    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:09.057563    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:09.553004    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:09.553024    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:09.553034    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:09.553042    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:09.556104    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:10.052832    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:10.052860    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:10.052870    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:10.052878    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:10.060001    6731 round_trippers.go:574] Response Status: 404 Not Found in 7 milliseconds
	I0819 10:50:10.553943    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:10.553967    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:10.553979    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:10.553984    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:10.557026    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:11.052217    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:11.052240    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:11.052251    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:11.052259    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:11.055611    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:11.553180    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:11.553218    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:11.553231    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:11.553237    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:11.556609    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:11.556679    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:12.053209    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:12.053234    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:12.053244    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:12.053260    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:12.056483    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:12.552948    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:12.552974    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:12.553016    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:12.553022    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:12.555995    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:13.054040    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:13.054066    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:13.054078    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:13.054086    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:13.057218    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:13.553331    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:13.553409    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:13.553428    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:13.553434    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:13.556700    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:13.557047    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:14.053359    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:14.053404    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:14.053418    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:14.053425    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:14.056093    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:14.554003    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:14.554020    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:14.554028    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:14.554033    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:14.556621    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:15.052240    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:15.052259    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:15.052267    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:15.052271    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:15.054851    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:15.552210    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:15.552233    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:15.552292    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:15.552296    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:15.554673    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:16.052627    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:16.052651    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:16.052662    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:16.052669    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:16.055859    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:16.055916    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:16.553446    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:16.553469    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:16.553480    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:16.553487    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:16.556493    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:17.052642    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:17.052665    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:17.052676    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:17.052684    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:17.055560    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:17.553327    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:17.553367    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:17.553375    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:17.553380    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:17.555848    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:18.054167    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:18.054195    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:18.054206    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:18.054214    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:18.057363    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:18.057447    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:18.552623    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:18.552664    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:18.552674    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:18.552682    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:18.556056    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:19.052692    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:19.052730    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:19.052738    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:19.052743    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:19.055382    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:19.553527    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:19.553553    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:19.553564    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:19.553602    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:19.557189    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:20.052711    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:20.052733    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:20.052744    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:20.052752    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:20.056398    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:20.552175    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:20.552196    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:20.552209    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:20.552216    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:20.555567    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:20.555628    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:21.054191    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:21.054216    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:21.054227    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:21.054235    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:21.057762    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:21.552794    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:21.552815    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:21.552827    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:21.552832    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:21.556056    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:22.052279    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:22.052315    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:22.052328    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:22.052335    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:22.055613    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:22.553162    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:22.553188    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:22.553232    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:22.553252    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:22.556362    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:22.556431    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:23.054316    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:23.054338    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:23.054350    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:23.054356    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:23.057542    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:23.552232    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:23.552245    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:23.552272    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:23.552280    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:23.553967    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:24.054003    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:24.054026    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:24.054037    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:24.054045    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:24.057299    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:24.552432    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:24.552455    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:24.552469    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:24.552477    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:24.555494    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:25.053013    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:25.053035    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:25.053047    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:25.053052    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:25.056230    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:25.056306    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:25.552539    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:25.552565    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:25.552577    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:25.552615    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:25.555941    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:26.053283    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:26.053298    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:26.053304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:26.053308    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:26.055446    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:26.553408    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:26.553431    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:26.553443    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:26.553450    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:26.556711    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:27.052272    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:27.052292    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:27.052303    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:27.052309    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:27.055283    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:27.553300    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:27.553326    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:27.553337    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:27.553344    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:27.556249    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:27.556320    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:28.052328    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:28.052357    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:28.052369    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:28.052375    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:28.054916    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:28.554421    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:28.554442    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:28.554453    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:28.554461    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:28.557682    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:29.053409    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:29.053426    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:29.053434    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:29.053438    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:29.055745    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:29.552751    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:29.552764    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:29.552769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:29.552771    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:29.554734    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:30.052686    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:30.052706    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:30.052712    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:30.052717    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:30.056887    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:50:30.056971    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:30.552691    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:30.552714    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:30.552725    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:30.552731    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:30.555684    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:31.052415    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:31.052438    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:31.052450    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:31.052456    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:31.054776    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	[log condensed: from 10:50:31 to 10:51:31, minikube polled GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03 roughly every 500 ms with the same request headers; every attempt returned "404 Not Found" in 1-3 milliseconds, and node_ready.go:53 periodically logged: error getting node "ha-431000-m03": nodes "ha-431000-m03" not found]
	I0819 10:51:31.554037    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:31.554063    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:31.554075    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:31.554084    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:31.559945    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:51:32.054494    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:32.054513    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:32.054522    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:32.054525    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:32.056953    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:32.554097    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:32.554118    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:32.554130    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:32.554137    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:32.558190    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:51:33.054128    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:33.054153    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:33.054164    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:33.054170    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:33.056763    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:33.553714    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:33.553752    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:33.553760    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:33.553764    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:33.556405    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:33.556457    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:34.054545    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:34.054569    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:34.054617    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:34.054624    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:34.057511    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:34.554849    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:34.554871    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:34.554883    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:34.554888    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:34.558363    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:35.053988    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:35.054013    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:35.054024    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:35.054031    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:35.056770    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:35.554587    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:35.554609    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:35.554619    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:35.554625    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:35.557960    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:35.558034    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:36.054198    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:36.054222    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:36.054229    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:36.054232    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:36.055802    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:36.554404    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:36.554428    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:36.554440    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:36.554446    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:36.557090    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:37.054425    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:37.054479    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:37.054490    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:37.054498    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:37.057228    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:37.555500    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:37.555512    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:37.555518    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:37.555521    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:37.557601    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:38.053768    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:38.053782    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:38.053791    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:38.053795    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:38.056165    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:38.056257    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:38.554665    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:38.554676    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:38.554682    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:38.554685    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:38.557419    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:39.054356    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:39.054378    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:39.054389    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:39.054395    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:39.057852    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:39.554782    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:39.554836    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:39.554844    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:39.554848    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:39.557248    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:40.054272    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:40.054293    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:40.054304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:40.054310    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:40.056976    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:40.057062    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:40.555343    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:40.555383    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:40.555394    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:40.555400    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:40.557223    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:41.054729    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:41.054786    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:41.054799    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:41.054806    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:41.057633    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:41.554501    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:41.554567    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:41.554582    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:41.554591    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:41.557830    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:42.054529    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:42.054554    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:42.054563    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:42.054568    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:42.057815    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:42.057887    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:42.555358    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:42.555370    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:42.555377    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:42.555381    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:42.557069    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:43.055502    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:43.055544    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:43.055552    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:43.055560    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:43.057767    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:43.554618    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:43.554638    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:43.554685    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:43.554690    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:43.557317    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:44.054601    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:44.054620    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:44.054626    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:44.054630    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:44.056993    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:44.554782    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:44.554797    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:44.554806    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:44.554810    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:44.556419    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:44.556476    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:45.054525    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:45.054559    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:45.054596    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:45.054633    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:45.058027    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:45.554369    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:45.554385    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:45.554393    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:45.554397    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:45.556944    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:46.054888    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:46.054906    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:46.054915    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:46.054919    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:46.057107    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:46.554088    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:46.554113    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:46.554124    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:46.554130    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:46.557394    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:46.557468    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:47.054175    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:47.054197    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:47.054209    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:47.054217    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:47.057370    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:47.555569    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:47.555594    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:47.555647    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:47.555655    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:47.559047    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:48.055273    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:48.055289    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:48.055300    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:48.055311    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:48.057338    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:48.554690    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:48.554708    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:48.554718    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:48.554724    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:48.557402    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:49.054179    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:49.054233    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:49.054246    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:49.054253    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:49.056979    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:49.057112    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:49.555596    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:49.555619    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:49.555629    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:49.555633    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:49.558319    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:50.054126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:50.054150    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:50.054161    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:50.054168    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:50.057661    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:50.555084    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:50.555110    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:50.555124    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:50.555133    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:50.558415    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:51.054816    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:51.054839    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:51.054854    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:51.054860    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:51.058330    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:51.058413    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:51.554613    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:51.554634    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:51.554645    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:51.554652    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:51.557804    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:52.054564    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:52.054619    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:52.054632    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:52.054638    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:52.057826    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:52.555343    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:52.555366    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:52.555378    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:52.555385    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:52.558107    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:53.055011    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:53.055025    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:53.055034    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:53.055037    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:53.057184    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:53.555329    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:53.555354    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:53.555366    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:53.555372    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:53.558170    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:53.558239    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:54.054793    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:54.054810    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:54.054818    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:54.054823    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:54.057650    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:54.556214    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:54.556241    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:54.556284    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:54.556295    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:54.559721    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:55.054592    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:55.054612    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:55.054624    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:55.054630    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:55.057530    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:55.554855    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:55.554874    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:55.554882    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:55.554886    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:55.557320    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:56.055331    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:56.055352    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:56.055361    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:56.055365    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:56.058215    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:56.058278    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:56.554547    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:56.554568    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:56.554579    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:56.554584    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:56.556705    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:57.054552    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:57.054565    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:57.054570    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:57.054572    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:57.056500    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:57.555559    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:57.555585    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:57.555626    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:57.555635    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:57.558863    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:58.054689    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:58.054707    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:58.054737    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:58.054742    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:58.057151    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:58.556315    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:58.556341    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:58.556352    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:58.556365    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:58.559715    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:58.559793    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:59.055113    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:59.055174    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:59.055189    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:59.055197    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:59.058730    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:59.555567    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:59.555594    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:59.555607    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:59.555612    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:59.558994    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:00.055486    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:00.055514    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:00.055526    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:00.055533    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:00.058720    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:00.555382    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:00.555401    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:00.555412    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:00.555418    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:00.558653    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:01.055751    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:01.055778    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:01.055790    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:01.055797    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:01.058484    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:01.058546    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:01.556276    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:01.556294    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:01.556304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:01.556307    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:01.558623    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:02.054896    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:02.054920    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:02.054973    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:02.054980    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:02.057416    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:02.554490    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:02.554516    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:02.554557    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:02.554568    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:02.557605    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:03.054883    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:03.054898    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:03.054907    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:03.054913    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:03.057408    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:03.554821    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:03.554844    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:03.554856    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:03.554862    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:03.557821    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:03.557893    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:04.054425    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:04.054474    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:04.054486    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:04.054493    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:04.057361    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:04.555269    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:04.555292    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:04.555303    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:04.555310    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:04.557975    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:05.055439    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:05.055462    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:05.055474    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:05.055480    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:05.058438    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:05.555041    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:05.555066    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:05.555110    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:05.555119    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:05.558183    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:05.558255    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:06.054744    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:06.054767    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:06.054780    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:06.054786    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:06.057960    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:06.554522    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:06.554548    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:06.554560    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:06.554568    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:06.557313    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:07.055173    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:07.055199    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:07.055239    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:07.055247    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:07.058653    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:07.555300    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:07.555317    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:07.555328    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:07.555333    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:07.558041    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:08.055354    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:08.055368    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:08.055376    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:08.055379    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:08.057374    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:08.057433    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:08.555236    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:08.555259    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:08.555270    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:08.555277    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:08.558651    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:09.055614    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:09.055640    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:09.055650    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:09.055683    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:09.058939    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:09.556607    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:09.556630    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:09.556641    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:09.556646    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:09.559951    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:10.056557    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:10.056584    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:10.056595    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:10.056603    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:10.060049    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:10.060123    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:10.555721    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:10.555747    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:10.555758    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:10.555766    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:10.559208    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:11.054718    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:11.054745    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:11.054757    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:11.054765    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:11.058258    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:11.554755    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:11.554775    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:11.554787    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:11.554792    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:11.557852    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:12.054659    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:12.054685    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:12.054725    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:12.054736    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:12.057557    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:12.555786    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:12.555805    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:12.555816    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:12.555825    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:12.558720    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:12.558790    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:13.054520    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:13.054531    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:13.054537    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:13.054541    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:13.056746    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:13.555035    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:13.555056    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:13.555069    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:13.555076    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:13.558241    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:14.055844    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:14.055904    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:14.055918    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:14.055926    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:14.059251    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:14.556682    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:14.556705    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:14.556718    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:14.556724    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:14.560091    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:14.560167    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:15.055321    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:15.055341    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:15.055353    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:15.055358    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:15.058575    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:15.554664    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:15.554684    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:15.554698    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:15.554706    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:15.557939    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:16.055206    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:16.055227    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:16.055238    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:16.055246    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:16.058598    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:16.555194    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:16.555214    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:16.555226    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:16.555232    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:16.558383    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:17.056686    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:17.056714    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:17.056726    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:17.056731    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:17.060029    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:17.060100    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:17.556714    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:17.556740    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:17.556750    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:17.556755    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:17.560141    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:18.054996    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:18.055011    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:18.055019    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:18.055025    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:18.057822    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:18.555828    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:18.555841    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:18.555849    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:18.555854    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:18.558383    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:19.055041    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:19.055065    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:19.055077    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:19.055085    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:19.058023    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:19.555151    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:19.555177    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:19.555188    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:19.555193    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:19.558408    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:19.558484    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:20.055165    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:20.055192    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:20.055253    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:20.055266    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:20.058241    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:20.555361    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:20.555384    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:20.555396    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:20.555404    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:20.558504    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:21.056388    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:21.056411    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:21.056424    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:21.056429    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:21.059536    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:21.554779    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:21.554793    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:21.554802    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:21.554805    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:21.557366    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:22.055736    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:22.055758    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:22.055769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:22.055776    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:22.058591    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:22.058661    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:22.555812    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:22.555836    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:22.555847    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:22.555854    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:22.558948    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:23.056853    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:23.056919    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:23.056944    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:23.056953    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:23.062337    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:52:23.554982    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:23.555000    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:23.555011    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:23.555018    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:23.557644    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:24.054899    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:24.054938    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:24.054947    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:24.054953    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:24.057729    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:24.556586    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:24.556600    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:24.556623    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:24.556627    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:24.558638    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:24.558692    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:25.056076    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:25.056096    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:25.056107    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:25.056114    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:25.058803    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:25.556269    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:25.556291    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:25.556303    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:25.556309    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:25.559377    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:26.055956    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:26.055982    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:26.055993    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:26.056000    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:26.059192    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:26.556280    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:26.556302    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:26.556313    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:26.556321    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:26.559053    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:26.559129    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:27.055476    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:27.055501    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:27.055512    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:27.055518    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:27.059048    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:27.554857    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:27.554875    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:27.554889    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:27.554899    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:27.557516    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:28.056934    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:28.056960    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:28.056970    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:28.056977    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:28.061498    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:52:28.556243    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:28.556264    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:28.556274    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:28.556280    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:28.560054    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:28.560129    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:29.056620    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:29.056646    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:29.056690    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:29.056714    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:29.060206    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:29.555385    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:29.555411    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:29.555422    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:29.555429    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:29.558512    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:30.055471    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:30.055493    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:30.055506    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:30.055514    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:30.058459    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:30.555484    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:30.555504    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:30.555516    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:30.555524    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:30.558311    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:31.054968    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:31.055015    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:31.055027    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:31.055032    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:31.057916    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:31.058060    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:31.556014    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:31.556033    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:31.556044    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:31.556050    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:31.559609    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:32.056534    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:32.056581    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:32.056591    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:32.056597    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:32.059302    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:32.555775    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:32.555794    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:32.555806    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:32.555814    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:32.558491    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:33.057040    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:33.057067    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:33.057077    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:33.057085    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:33.060635    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:33.060713    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:33.555570    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:33.555591    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:33.555602    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:33.555608    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:33.559425    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:34.057120    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:34.057141    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:34.057148    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:34.057153    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:34.060018    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:34.555126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:34.555138    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:34.555146    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:34.555150    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:34.557094    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:35.055444    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:35.055467    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:35.055479    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:35.055486    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:35.058594    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:35.555149    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:35.555197    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:35.555209    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:35.555218    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:35.558115    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:35.558186    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:36.056849    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:36.056876    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:36.056920    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:36.056932    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:36.060766    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:36.555499    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:36.555519    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:36.555528    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:36.555532    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:36.558358    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:37.055144    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:37.055195    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:37.055208    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:37.055215    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:37.058216    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:37.555944    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:37.556001    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:37.556013    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:37.556023    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:37.559260    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:37.559332    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:38.055318    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:38.055338    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:38.055350    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:38.055355    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:38.058181    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:38.555299    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:38.555317    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:38.555329    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:38.555337    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:38.558216    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:39.056988    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:39.057016    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:39.057073    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:39.057083    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:39.060253    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:39.555159    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:39.555181    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:39.555193    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:39.555200    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:39.558336    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:40.055085    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:40.055100    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:40.055105    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:40.055108    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:40.057225    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:40.057326    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:40.556336    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:40.556362    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:40.556374    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:40.556380    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:40.559611    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:41.056619    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:41.056644    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:41.056655    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:41.056661    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:41.060851    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:52:41.555283    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:41.555295    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:41.555302    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:41.555305    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:41.556982    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:42.056943    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:42.056967    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:42.056978    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:42.056985    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:42.060100    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:42.060167    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:42.556338    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:42.556357    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:42.556367    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:42.556377    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:42.559414    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:43.055551    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:43.055573    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:43.055586    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:43.055594    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:43.058624    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:43.555249    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:43.555259    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:43.555264    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:43.555266    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:43.557514    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:44.057256    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:44.057279    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:44.057320    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:44.057332    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:44.060185    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:44.060336    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:44.555282    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:44.555310    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:44.555349    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:44.555359    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:44.557869    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:45.055728    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:45.055742    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:45.055751    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:45.055756    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:45.058016    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:45.556887    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:45.556939    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:45.556953    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:45.556961    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:45.560018    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:46.055302    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:46.055315    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:46.055321    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:46.055324    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:46.059667    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:52:46.555661    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:46.555681    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:46.555693    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:46.555699    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:46.558535    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:46.558625    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:47.055328    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:47.055352    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:47.055364    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:47.055370    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:47.062725    6731 round_trippers.go:574] Response Status: 404 Not Found in 7 milliseconds
	I0819 10:52:47.555663    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:47.555688    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:47.555699    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:47.555706    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:47.557822    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:48.056671    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:48.056687    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:48.056695    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:48.056700    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:48.059006    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:48.555409    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:48.555429    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:48.555441    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:48.555450    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:48.557941    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:49.057092    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:49.057119    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:49.057131    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:49.057137    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:49.060065    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:49.060130    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:49.060145    6731 node_ready.go:38] duration metric: took 4m0.005002355s for node "ha-431000-m03" to be "Ready" ...
	I0819 10:52:49.082024    6731 out.go:201] 
	W0819 10:52:49.103661    6731 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0819 10:52:49.103680    6731 out.go:270] * 
	W0819 10:52:49.104908    6731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:52:49.166900    6731 out.go:201] 

                                                
                                                
** /stderr **
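The four minutes of 404s above are minikube's node_ready wait loop: roughly every 500ms it asks the API server for the node object "ha-431000-m03", which never re-registered after the restart, until the wait deadline turns the polling into the GUEST_START failure. For reference, the general shape of such a readiness wait against client-go is sketched here; this is a minimal illustration of the pattern, not minikube's actual node_ready.go, and the helper name waitNodeReady is hypothetical.

	// Minimal sketch of a node-readiness poll, assuming client-go and
	// apimachinery; not minikube's implementation.
	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls every 500ms until the named node exists and
	// reports Ready, or until the timeout expires with a deadline error
	// (the "context deadline exceeded" seen in the failure above).
	func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // not registered yet: the 404s above
				}
				if err != nil {
					return false, err // any other error aborts the wait
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}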
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-431000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-431000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (3.189163801s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT |                     |
	|         | busybox-7dff88458-wfcpq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-431000 -v=7                | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node stop m02 -v=7         | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:43 PDT | 19 Aug 24 10:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node start m02 -v=7        | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:45 PDT | 19 Aug 24 10:45 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-431000 -v=7               | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:46 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-431000 -v=7                    | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:46 PDT | 19 Aug 24 10:47 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-431000 --wait=true -v=7        | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:47 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-431000                    | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:52 PDT |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:47:12
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
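	Every entry below follows the klog header format documented on the line above. A quick way to split such lines is sketched here as an illustrative Go snippet; the regular expression is an assumption for illustration, not something minikube ships.
	
		// Sketch of a parser for the klog header format described above.
		package klogparse
	
		import (
			"fmt"
			"regexp"
		)
	
		// headerRE captures severity, date (mmdd), time, thread id, file:line,
		// and the message from an "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" entry.
		var headerRE = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^]]+)\] (.*)$`)
	
		func Parse(line string) {
			if m := headerRE.FindStringSubmatch(line); m != nil {
				fmt.Printf("sev=%s date=%s time=%s tid=%s src=%s msg=%q\n",
					m[1], m[2], m[3], m[4], m[5], m[6])
			}
		}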
	I0819 10:47:12.990834    6731 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:47:12.991103    6731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:47:12.991108    6731 out.go:358] Setting ErrFile to fd 2...
	I0819 10:47:12.991112    6731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:47:12.991281    6731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:47:12.992723    6731 out.go:352] Setting JSON to false
	I0819 10:47:13.017592    6731 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4603,"bootTime":1724085030,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:47:13.017712    6731 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:47:13.040160    6731 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:47:13.085144    6731 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:47:13.085199    6731 notify.go:220] Checking for updates...
	I0819 10:47:13.129094    6731 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:13.150001    6731 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:47:13.191985    6731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:47:13.234991    6731 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:47:13.255968    6731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:47:13.277879    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:13.278061    6731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:47:13.278758    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:13.278849    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:13.288403    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52017
	I0819 10:47:13.288766    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:13.289188    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:13.289197    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:13.289457    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:13.289596    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:13.317906    6731 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:47:13.359906    6731 start.go:297] selected driver: hyperkit
	I0819 10:47:13.359936    6731 start.go:901] validating driver "hyperkit" against &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:47:13.360173    6731 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:47:13.360383    6731 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:47:13.360591    6731 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:47:13.373620    6731 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:47:13.379058    6731 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:13.379083    6731 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:47:13.382480    6731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:47:13.382556    6731 cni.go:84] Creating CNI manager for ""
	I0819 10:47:13.382566    6731 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:47:13.382642    6731 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
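	The Nodes list in the dump above is the stored profile's view of the cluster: m03 (192.169.0.7) is still recorded as a control-plane node even though it never rejoined, and it is exactly the node the wait loop earlier polled for in vain. As a reading aid, the per-node fields flattened into that one line correspond to a struct of roughly this shape (reconstructed from the log for illustration, not copied from minikube's source):
	
		// Node mirrors the per-node fields visible in the config dump above.
		// Field names come from the dump itself; types are inferred.
		type Node struct {
			Name              string // empty for the primary control-plane node
			IP                string // e.g. 192.169.0.7 for m03
			Port              int    // 8443 on control-plane nodes, 0 on m04 (worker)
			KubernetesVersion string // v1.31.0 across the cluster
			ContainerRuntime  string // docker
			ControlPlane      bool
			Worker            bool
		}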
	I0819 10:47:13.382745    6731 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:47:13.427064    6731 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:47:13.448053    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:47:13.448130    6731 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:47:13.448197    6731 cache.go:56] Caching tarball of preloaded images
	I0819 10:47:13.448409    6731 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:47:13.448432    6731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:47:13.448617    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:13.449596    6731 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:47:13.449728    6731 start.go:364] duration metric: took 105.822µs to acquireMachinesLock for "ha-431000"
	I0819 10:47:13.449768    6731 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:47:13.449785    6731 fix.go:54] fixHost starting: 
	I0819 10:47:13.450204    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:13.450230    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:13.463559    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52019
	I0819 10:47:13.464010    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:13.464458    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:13.464469    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:13.464831    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:13.465014    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:13.465167    6731 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:47:13.465295    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:13.465439    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:47:13.466971    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 4802 missing from process table
	I0819 10:47:13.467037    6731 fix.go:112] recreateIfNeeded on ha-431000: state=Stopped err=<nil>
	I0819 10:47:13.467066    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	W0819 10:47:13.467199    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:47:13.510101    6731 out.go:177] * Restarting existing hyperkit VM for "ha-431000" ...
	I0819 10:47:13.531063    6731 main.go:141] libmachine: (ha-431000) Calling .Start
	I0819 10:47:13.531337    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:13.531403    6731 main.go:141] libmachine: (ha-431000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:47:13.533562    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 4802 missing from process table
	I0819 10:47:13.533575    6731 main.go:141] libmachine: (ha-431000) DBG | pid 4802 is in state "Stopped"
	I0819 10:47:13.533592    6731 main.go:141] libmachine: (ha-431000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid...
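
fix.go concludes the VM must be restarted because pid 4802 from hyperkit.pid is no longer in the process table, so the stale pid file is removed. A minimal sketch of that check, assuming the usual Unix signal-0 probe (pidAlive is a hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	// pidAlive reports whether a process with the given pid still exists.
	// Sending signal 0 performs the existence check without actually
	// delivering a signal; a false result means the pid file is stale.
	func pidAlive(pid int) bool {
		p, err := os.FindProcess(pid) // never fails on Unix
		if err != nil {
			return false
		}
		return p.Signal(syscall.Signal(0)) == nil
	}

	func main() {
		fmt.Println(pidAlive(os.Getpid()), pidAlive(999999))
	}

If pidAlive returns false, the driver can safely delete hyperkit.pid and relaunch the VM, which is exactly what the log shows next.
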
	I0819 10:47:13.534063    6731 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:47:13.685824    6731 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:47:13.685856    6731 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:47:13.685937    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c10e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:13.685980    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c10e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:13.686041    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:47:13.686089    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:47:13.686116    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:47:13.687515    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Pid is 6743
	I0819 10:47:13.687875    6731 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:47:13.687888    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:13.687950    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:47:13.689549    6731 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:47:13.689620    6731 main.go:141] libmachine: (ha-431000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:47:13.689637    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d62c}
	I0819 10:47:13.689650    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 10:47:13.689661    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:47:13.689670    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:47:13.689679    6731 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:47:13.689685    6731 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
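
The driver recovers the restarted VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC it generated, as the "Searching for ... Found match" lines show. A rough sketch of that lookup, under the assumption (consistent with the entries printed above) that each lease block lists ip_address= before hw_address= (ipForMAC is illustrative, not minikube's parser):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipForMAC scans a macOS dhcpd_leases file for an entry whose
	// hw_address field contains the given MAC, returning the ip_address
	// seen earlier in the same lease block.
	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		ip := ""
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "b2:ad:7c:2f:19:d9")
		fmt.Println(ip, err)
	}
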
	I0819 10:47:13.689750    6731 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:47:13.690466    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:13.690696    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:13.691360    6731 machine.go:93] provisionDockerMachine start ...
	I0819 10:47:13.691391    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:13.691550    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:13.691652    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:13.691765    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:13.691853    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:13.691949    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:13.692101    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:13.692310    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:13.692319    6731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:47:13.695286    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:47:13.768567    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:47:13.769376    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:13.769389    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:13.769397    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:13.769403    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:14.169410    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:47:14.169434    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:47:14.284387    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:14.284423    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:14.284433    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:14.284452    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:14.285281    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:47:14.285292    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:47:20.122707    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:47:20.122768    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:47:20.122798    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:47:20.146889    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:47:24.038753    6731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
	I0819 10:47:27.097051    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:47:27.097068    6731 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:47:27.097216    6731 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:47:27.097227    6731 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:47:27.097372    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.097464    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.097585    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.097687    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.097778    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.097909    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.098097    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.098119    6731 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:47:27.159700    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:47:27.159721    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.159879    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.159986    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.160071    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.160158    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.160304    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.160447    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.160458    6731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:47:27.217596    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:47:27.217617    6731 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:47:27.217642    6731 buildroot.go:174] setting up certificates
	I0819 10:47:27.217648    6731 provision.go:84] configureAuth start
	I0819 10:47:27.217654    6731 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:47:27.217789    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:27.217907    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.218009    6731 provision.go:143] copyHostCerts
	I0819 10:47:27.218040    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:27.218106    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:47:27.218115    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:27.219007    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:47:27.219230    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:27.219271    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:47:27.219275    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:27.219362    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:47:27.219509    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:27.219546    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:47:27.219551    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:27.219626    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:47:27.219767    6731 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
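
provision.go:117 issues a server certificate signed by the minikube CA, covering the san=[...] list shown above. A self-contained sketch of the same operation with Go's crypto/x509; the CA here is generated on the fly for illustration, whereas minikube loads its CA from ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server certificate signed by ca/caKey with
	// the given IP and DNS SANs, returning the DER bytes and the new key.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
		ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,      // e.g. 127.0.0.1, 192.169.0.5
			DNSNames:     dnsNames, // e.g. ha-431000, localhost, minikube
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}

	func main() {
		// Throwaway CA, standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		der, _, err := newServerCert(ca, caKey,
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
			[]string{"ha-431000", "localhost", "minikube"})
		fmt.Println(len(der), err)
	}
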
	I0819 10:47:27.270993    6731 provision.go:177] copyRemoteCerts
	I0819 10:47:27.271039    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:47:27.271051    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.271175    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.271261    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.271352    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.271445    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:27.302754    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:47:27.302826    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:47:27.322815    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:47:27.322877    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 10:47:27.342451    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:47:27.342511    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:47:27.362246    6731 provision.go:87] duration metric: took 144.581948ms to configureAuth
	I0819 10:47:27.362260    6731 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:47:27.362446    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:27.362461    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:27.362588    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.362675    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.362776    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.362858    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.362949    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.363077    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.363202    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.363214    6731 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:47:27.413858    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:47:27.413870    6731 buildroot.go:70] root file system type: tmpfs
	I0819 10:47:27.413956    6731 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:47:27.413972    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.414097    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.414209    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.414293    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.414367    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.414499    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.414633    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.414678    6731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:47:27.476805    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:47:27.476825    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.476950    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.477051    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.477141    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.477235    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.477363    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.477517    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.477530    6731 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:47:29.141388    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:47:29.141402    6731 machine.go:96] duration metric: took 15.449700536s to provisionDockerMachine
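
The docker.service update just completed is idempotent: the `diff -u old new || { mv ...; daemon-reload; enable; restart; }` command only swaps the unit in and restarts Docker when the rendered content differs (here diff failed because the file did not yet exist, so the unit was installed). The same guard expressed in Go; replaceIfChanged is a hypothetical helper, and the real flow runs these steps over SSH:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// replaceIfChanged installs the rendered unit file only when it
	// differs from what is already on disk, reporting whether a
	// daemon-reload/restart is needed.
	func replaceIfChanged(path string, rendered []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, rendered) {
			return false, nil // unchanged: skip daemon-reload and restart
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(path+".new", path)
	}

	func main() {
		changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		fmt.Println(changed, err)
	}
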
	I0819 10:47:29.141419    6731 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:47:29.141427    6731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:47:29.141442    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.141639    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:47:29.141653    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.141751    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.141838    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.141944    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.142024    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.177773    6731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:47:29.182929    6731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:47:29.182945    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:47:29.183045    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:47:29.183232    6731 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:47:29.183239    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:47:29.183446    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:47:29.193329    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:29.226539    6731 start.go:296] duration metric: took 85.108142ms for postStartSetup
	I0819 10:47:29.226566    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.226743    6731 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:47:29.226766    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.226881    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.226983    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.227075    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.227158    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.259218    6731 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:47:29.259277    6731 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:47:29.313364    6731 fix.go:56] duration metric: took 15.863243842s for fixHost
	I0819 10:47:29.313386    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.313537    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.313631    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.313718    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.313802    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.313927    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:29.314073    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:29.314080    6731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:47:29.366201    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089649.282494519
	
	I0819 10:47:29.366218    6731 fix.go:216] guest clock: 1724089649.282494519
	I0819 10:47:29.366223    6731 fix.go:229] Guest: 2024-08-19 10:47:29.282494519 -0700 PDT Remote: 2024-08-19 10:47:29.313376 -0700 PDT m=+16.361598467 (delta=-30.881481ms)
	I0819 10:47:29.366239    6731 fix.go:200] guest clock delta is within tolerance: -30.881481ms
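
fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the -30.88ms delta as within tolerance. A sketch of that comparison; parsing the two halves separately keeps full nanosecond precision, and the 2s tolerance is an assumption rather than minikube's exact threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output
	// (e.g. 1724089649.282494519) into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/truncate to 9 digits
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1724089649.282494519")
		delta := guest.Sub(time.Now())
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Println(delta, delta > -tolerance && delta < tolerance)
	}
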
	I0819 10:47:29.366243    6731 start.go:83] releasing machines lock for "ha-431000", held for 15.916161384s
	I0819 10:47:29.366262    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366404    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:29.366507    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366799    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366892    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366979    6731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:47:29.367012    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.367029    6731 ssh_runner.go:195] Run: cat /version.json
	I0819 10:47:29.367039    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.367114    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.367149    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.367227    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.367237    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.367322    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.367335    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.367423    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.367436    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.444266    6731 ssh_runner.go:195] Run: systemctl --version
	I0819 10:47:29.449674    6731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:47:29.454027    6731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:47:29.454072    6731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:47:29.466466    6731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:47:29.466477    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:29.466578    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:29.483411    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:47:29.492453    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:47:29.501213    6731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:47:29.501260    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:47:29.510090    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:29.519075    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:47:29.528065    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:29.536949    6731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:47:29.545786    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:47:29.554573    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:47:29.563322    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:47:29.572057    6731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:47:29.579919    6731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:47:29.588348    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:29.686832    6731 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:47:29.707105    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:29.707180    6731 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:47:29.719452    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:29.730098    6731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:47:29.745544    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:29.756577    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:29.767542    6731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:47:29.790919    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:29.802179    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:29.816853    6731 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:47:29.819743    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:47:29.827667    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:47:29.841027    6731 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:47:29.941968    6731 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:47:30.045493    6731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:47:30.045564    6731 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:47:30.059349    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:30.153983    6731 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:47:32.475528    6731 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.321474833s)
	I0819 10:47:32.475593    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:47:32.486499    6731 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:47:32.499892    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:32.510342    6731 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:47:32.602953    6731 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:47:32.726572    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:32.829541    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:47:32.850769    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:32.861330    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:32.957342    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:47:33.019734    6731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:47:33.019811    6731 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:47:33.024665    6731 start.go:563] Will wait 60s for crictl version
	I0819 10:47:33.024717    6731 ssh_runner.go:195] Run: which crictl
	I0819 10:47:33.028242    6731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:47:33.053696    6731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:47:33.053765    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:33.070786    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:33.110368    6731 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:47:33.110419    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:33.110842    6731 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:47:33.115455    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:47:33.125038    6731 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:47:33.125131    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:47:33.125186    6731 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:47:33.138502    6731 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:47:33.138514    6731 docker.go:615] Images already preloaded, skipping extraction
	I0819 10:47:33.138587    6731 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:47:33.152253    6731 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:47:33.152273    6731 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:47:33.152286    6731 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:47:33.152388    6731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:47:33.152487    6731 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:47:33.188995    6731 cni.go:84] Creating CNI manager for ""
	I0819 10:47:33.189008    6731 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:47:33.189020    6731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:47:33.189037    6731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:47:33.189121    6731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 10:47:33.189137    6731 kube-vip.go:115] generating kube-vip config ...
	I0819 10:47:33.189189    6731 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:47:33.201830    6731 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
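
kube-vip.go only switches on control-plane load-balancing (the lb_enable variable in the manifest below) after the modprobe probe above succeeds. A sketch of that gate; it runs the probe locally for simplicity, whereas minikube executes the modprobe on the guest over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ipvsAvailable reports whether the IPVS kernel modules that
	// kube-vip's load balancer relies on can be loaded.
	func ipvsAvailable() bool {
		cmd := exec.Command("sudo", "sh", "-c",
			"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
		return cmd.Run() == nil
	}

	func main() {
		fmt.Println("enable lb:", ipvsAvailable())
	}
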
	I0819 10:47:33.201940    6731 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
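
The manifest above is rendered with the per-cluster values (VIP 192.169.0.254, port 8443) filled in. A trimmed text/template sketch of that rendering; vipTmpl covers only the varying fields and is not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// vipTmpl is a stand-in for the kube-vip pod template, reduced to
	// the two env entries that change per cluster.
	const vipTmpl = `    - name: address
	      value: {{ .VIP }}
	    - name: port
	      value: "{{ .Port }}"
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(vipTmpl))
		_ = t.Execute(os.Stdout, struct {
			VIP  string
			Port int
		}{VIP: "192.169.0.254", Port: 8443})
	}
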
	I0819 10:47:33.201997    6731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:47:33.210450    6731 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:47:33.210495    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:47:33.217871    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:47:33.231674    6731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:47:33.245013    6731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:47:33.259054    6731 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:47:33.272685    6731 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:47:33.275698    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:47:33.285047    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:33.385931    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:47:33.400131    6731 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:47:33.400143    6731 certs.go:194] generating shared ca certs ...
	I0819 10:47:33.400154    6731 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:33.400345    6731 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:47:33.400418    6731 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:47:33.400428    6731 certs.go:256] generating profile certs ...
	I0819 10:47:33.400545    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:47:33.400566    6731 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59
	I0819 10:47:33.400581    6731 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0819 10:47:33.706693    6731 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59 ...
	I0819 10:47:33.706714    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59: {Name:mk3ef913d0a2b6704747c9cac46f692f95ca83d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:33.707051    6731 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59 ...
	I0819 10:47:33.707062    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59: {Name:mk47cdc11bd849114252b3917882ba0c41ebb9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:33.707265    6731 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:47:33.707470    6731 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:47:33.707706    6731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:47:33.707719    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:47:33.707742    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:47:33.707763    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:47:33.707783    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:47:33.707800    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:47:33.707818    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:47:33.707836    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:47:33.707854    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:47:33.707965    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:47:33.708012    6731 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:47:33.708021    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:47:33.708051    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:47:33.708080    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:47:33.708108    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:47:33.708172    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:33.708203    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:47:33.708224    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:47:33.708242    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:33.708696    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:47:33.750639    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:47:33.793357    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:47:33.817739    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:47:33.839363    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 10:47:33.859538    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:47:33.879468    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:47:33.899477    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:47:33.919387    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:47:33.939367    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:47:33.959111    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:47:33.978053    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:47:33.991986    6731 ssh_runner.go:195] Run: openssl version
	I0819 10:47:33.996321    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:47:34.004824    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:47:34.008214    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:47:34.008253    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:47:34.012526    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:47:34.020744    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:47:34.029254    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:47:34.032767    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:47:34.032806    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:47:34.037138    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:47:34.045595    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:47:34.053763    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:34.057262    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:34.057304    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:34.061509    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
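
Note: the `openssl x509 -hash` / `ln -fs` pairs above exist because OpenSSL resolves trust anchors in /etc/ssl/certs by filename: it hashes a certificate's subject and looks for `<hash>.0`. Each PEM installed under /usr/share/ca-certificates therefore needs a hash-named symlink (51391683.0, 3ec20f2e.0, and b5213941.0 in this run). A sketch of the same step in Go, shelling out to openssl (writing to /etc/ssl/certs needs root):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the subject hash of a PEM cert and creates the
// <hash>.0 symlink OpenSSL expects. The original log links via an
// intermediate /etc/ssl/certs/<name>.pem; this sketch links directly.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs equivalent: replace any stale symlink
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```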
	I0819 10:47:34.070103    6731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:47:34.073578    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 10:47:34.078201    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 10:47:34.082612    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 10:47:34.087103    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 10:47:34.091437    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 10:47:34.095760    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
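
Note: `-checkend 86400` makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the runner decides whether each control-plane cert can be reused or must be regenerated. The same check in Go, as a sketch:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```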
	I0819 10:47:34.100115    6731 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:47:34.100230    6731 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:47:34.113393    6731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:47:34.120906    6731 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 10:47:34.120917    6731 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 10:47:34.120957    6731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 10:47:34.128485    6731 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:47:34.128797    6731 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-431000" does not appear in /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:34.128883    6731 kubeconfig.go:62] /Users/jenkins/minikube-integration/19478-1622/kubeconfig needs updating (will repair): [kubeconfig missing "ha-431000" cluster setting kubeconfig missing "ha-431000" context setting]
	I0819 10:47:34.129058    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:34.129469    6731 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:34.129662    6731 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1139f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:47:34.129951    6731 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:47:34.130122    6731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 10:47:34.137350    6731 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0819 10:47:34.137364    6731 kubeadm.go:597] duration metric: took 16.443406ms to restartPrimaryControlPlane
	I0819 10:47:34.137370    6731 kubeadm.go:394] duration metric: took 37.259659ms to StartCluster
	I0819 10:47:34.137379    6731 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:34.137458    6731 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:34.137795    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:34.138049    6731 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:47:34.138062    6731 start.go:241] waiting for startup goroutines ...
	I0819 10:47:34.138093    6731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:47:34.138228    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:34.182792    6731 out.go:177] * Enabled addons: 
	I0819 10:47:34.203662    6731 addons.go:510] duration metric: took 65.572958ms for enable addons: enabled=[]
	I0819 10:47:34.203791    6731 start.go:246] waiting for cluster config update ...
	I0819 10:47:34.203803    6731 start.go:255] writing updated cluster config ...
	I0819 10:47:34.226648    6731 out.go:201] 
	I0819 10:47:34.250149    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:34.250276    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:34.272715    6731 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:47:34.314737    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:47:34.314772    6731 cache.go:56] Caching tarball of preloaded images
	I0819 10:47:34.314979    6731 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:47:34.315025    6731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:47:34.315140    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:34.316055    6731 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:47:34.316175    6731 start.go:364] duration metric: took 95.252µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:47:34.316201    6731 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:47:34.316218    6731 fix.go:54] fixHost starting: m02
	I0819 10:47:34.316649    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:34.316675    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:34.325824    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52042
	I0819 10:47:34.326364    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:34.326725    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:34.326734    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:34.326990    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:34.327207    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:34.327371    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:47:34.327556    6731 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:34.327684    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:47:34.328623    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6436 missing from process table
	I0819 10:47:34.328664    6731 fix.go:112] recreateIfNeeded on ha-431000-m02: state=Stopped err=<nil>
	I0819 10:47:34.328674    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	W0819 10:47:34.328799    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:47:34.376702    6731 out.go:177] * Restarting existing hyperkit VM for "ha-431000-m02" ...
	I0819 10:47:34.397748    6731 main.go:141] libmachine: (ha-431000-m02) Calling .Start
	I0819 10:47:34.398040    6731 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:34.398181    6731 main.go:141] libmachine: (ha-431000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:47:34.399890    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6436 missing from process table
	I0819 10:47:34.399903    6731 main.go:141] libmachine: (ha-431000-m02) DBG | pid 6436 is in state "Stopped"
	I0819 10:47:34.399920    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid...
	I0819 10:47:34.400291    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:47:34.428075    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:47:34.428103    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:47:34.428232    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003af200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:34.428264    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003af200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:34.428356    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:47:34.428395    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:47:34.428414    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:47:34.429765    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Pid is 6783
	I0819 10:47:34.430472    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:47:34.430523    6731 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:34.430650    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6783
	I0819 10:47:34.432548    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:47:34.432573    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:47:34.432586    6731 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d6ab}
	I0819 10:47:34.432599    6731 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d62c}
	I0819 10:47:34.432608    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:47:34.432619    6731 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
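
Note: the driver recovers the restarted VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC hyperkit generated (5a:74:68:47:b9:72 here). A loose Go sketch of that lookup; the exact lease-file field layout is an assumption inferred from the `dhcp entry:` lines above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans dhcpd_leases for a block whose hw_address line ends
// with mac and returns the ip_address seen just before it. Parsing is
// deliberately loose; the real file uses {name=... ip_address=...
// hw_address=1,<mac> ...} blocks.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			return ip, nil
		}
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "5a:74:68:47:b9:72")
	fmt.Println(ip, err)
}
```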
	I0819 10:47:34.432669    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:47:34.433339    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:34.433544    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:34.434121    6731 machine.go:93] provisionDockerMachine start ...
	I0819 10:47:34.434131    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:34.434259    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:34.434360    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:34.434461    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:34.434563    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:34.434665    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:34.434786    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:34.434931    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:34.434939    6731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:47:34.437670    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:47:34.446364    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:47:34.447557    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:34.447574    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:34.447585    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:34.447595    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:34.831206    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:47:34.831223    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:47:34.946012    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:34.946044    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:34.946065    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:34.946082    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:34.946901    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:47:34.946912    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:47:40.531269    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:47:40.531330    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:47:40.531340    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:47:40.556233    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:47:45.507448    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:47:45.507462    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:47:45.507581    6731 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:47:45.507593    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:47:45.507670    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.507776    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.507909    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.507996    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.508101    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.508234    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.508381    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.508389    6731 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:47:45.583754    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:47:45.583774    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.583905    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.584002    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.584099    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.584184    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.584323    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.584482    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.584494    6731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:47:45.658171    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:47:45.658187    6731 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:47:45.658197    6731 buildroot.go:174] setting up certificates
	I0819 10:47:45.658205    6731 provision.go:84] configureAuth start
	I0819 10:47:45.658211    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:47:45.658365    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:45.658474    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.658558    6731 provision.go:143] copyHostCerts
	I0819 10:47:45.658585    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:45.658635    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:47:45.658641    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:45.658762    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:47:45.658966    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:45.658995    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:47:45.658999    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:45.659067    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:47:45.659209    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:45.659236    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:47:45.659241    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:45.659309    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:47:45.659487    6731 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
	I0819 10:47:45.772365    6731 provision.go:177] copyRemoteCerts
	I0819 10:47:45.772449    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:47:45.772468    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.772616    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.772719    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.772815    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.772905    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:45.813424    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:47:45.813495    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:47:45.833296    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:47:45.833365    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:47:45.853251    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:47:45.853315    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:47:45.873370    6731 provision.go:87] duration metric: took 215.153593ms to configureAuth
	I0819 10:47:45.873384    6731 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:47:45.873555    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:45.873574    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:45.873707    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.873815    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.873904    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.874006    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.874106    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.874221    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.874350    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.874357    6731 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:47:45.937816    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:47:45.937826    6731 buildroot.go:70] root file system type: tmpfs
	I0819 10:47:45.937934    6731 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:47:45.937947    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.938086    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.938186    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.938276    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.938370    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.938507    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.938641    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.938689    6731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:47:46.014680    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:47:46.014697    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:46.014833    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:46.014924    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:46.015010    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:46.015092    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:46.015215    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:46.015354    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:46.015366    6731 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:47:47.693084    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:47:47.693099    6731 machine.go:96] duration metric: took 13.258686385s to provisionDockerMachine
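
Note: provisioning converged via the `diff ... || { mv ...; systemctl ... }` command run at 10:47:46.015366: diff exits 0 when the installed docker.service already matches the freshly rendered .new file, so the replace/daemon-reload/enable/restart branch after `||` runs only when something changed. Here diff failed because no unit existed yet, hence the `Created symlink` line. A sketch of building that idempotent sync command:

```go
package main

import "fmt"

// unitSyncCmd mirrors the idiom above: compare the live unit with the
// freshly written .new file and only install + restart on a difference.
func unitSyncCmd(unit string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		unit)
}

func main() {
	fmt.Println(unitSyncCmd("/lib/systemd/system/docker.service"))
}
```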
	I0819 10:47:47.693106    6731 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:47:47.693114    6731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:47:47.693124    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:47.693322    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:47:47.693338    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:47.693428    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:47.693543    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.693661    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:47.693761    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:47.738652    6731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:47:47.742121    6731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:47:47.742133    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:47:47.742223    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:47:47.742376    6731 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:47:47.742383    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:47:47.742539    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:47:47.750138    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:47.780304    6731 start.go:296] duration metric: took 87.187547ms for postStartSetup
	I0819 10:47:47.780325    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:47.780489    6731 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:47:47.780503    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:47.780584    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:47.780680    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.780768    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:47.780844    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:47.820828    6731 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:47:47.820883    6731 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:47:47.874212    6731 fix.go:56] duration metric: took 13.557703241s for fixHost
	I0819 10:47:47.874239    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:47.874390    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:47.874493    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.874580    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.874675    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:47.874801    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:47.874942    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:47.874950    6731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:47:47.939805    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089667.971112519
	
	I0819 10:47:47.939818    6731 fix.go:216] guest clock: 1724089667.971112519
	I0819 10:47:47.939826    6731 fix.go:229] Guest: 2024-08-19 10:47:47.971112519 -0700 PDT Remote: 2024-08-19 10:47:47.874228 -0700 PDT m=+34.922052537 (delta=96.884519ms)
	I0819 10:47:47.939836    6731 fix.go:200] guest clock delta is within tolerance: 96.884519ms
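
Note: the clock check runs `date +%s.%N` in the guest and compares the result with the host's wall clock; the ~97 ms delta logged above is inside tolerance, so no time sync is forced. A Go sketch of the comparison, fed the exact values from this run:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is from the given host timestamp. float64 parsing
// loses sub-microsecond precision, which is fine at this tolerance.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	d, err := clockDelta("1724089667.971112519", time.Unix(1724089667, 874228000))
	fmt.Println(d, err) // ~96.88ms, matching the delta logged above
}
```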
	I0819 10:47:47.939839    6731 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.623361057s
	I0819 10:47:47.939855    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:47.939978    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:47.963353    6731 out.go:177] * Found network options:
	I0819 10:47:47.984541    6731 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:47:48.006564    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:47:48.006602    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:48.007422    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:48.007661    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:48.007799    6731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:47:48.007841    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:47:48.007857    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:47:48.007960    6731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:47:48.007982    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:48.008073    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:48.008275    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:48.008303    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:48.008450    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:48.008512    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:48.008705    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:48.008702    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:48.008832    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:47:48.046347    6731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:47:48.046407    6731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:47:48.092373    6731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:47:48.092395    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:48.092498    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:48.108693    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:47:48.117700    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:47:48.126528    6731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:47:48.126570    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:47:48.135370    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:48.144295    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:47:48.153239    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:48.162188    6731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:47:48.171097    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:47:48.180126    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:47:48.188940    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:47:48.197810    6731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:47:48.205812    6731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:47:48.213773    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:48.325175    6731 ssh_runner.go:195] Run: sudo systemctl restart containerd
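
The sed edits above rewrite /etc/containerd/config.toml for the cgroupfs driver: sandbox image, runc v2 runtime, CNI conf_dir, unprivileged ports, and SystemdCgroup = false, followed by a daemon-reload and containerd restart. A small Go sketch of just the SystemdCgroup rewrite, using a regex equivalent to the sed expression in the log:

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroupFalse rewrites any "SystemdCgroup = ..." line to false,
// preserving indentation, like the sed command in the log above.
func setSystemdCgroupFalse(configToml string) string {
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configToml, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	fmt.Print(setSystemdCgroupFalse(in))
}
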
	I0819 10:47:48.347923    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:48.347991    6731 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:47:48.361302    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:48.374626    6731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:47:48.389101    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:48.399756    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:48.409828    6731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:47:48.432006    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:48.442558    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:48.457632    6731 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:47:48.460581    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:47:48.467778    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:47:48.481436    6731 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:47:48.581769    6731 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:47:48.698298    6731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:47:48.698327    6731 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:47:48.712343    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:48.807611    6731 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:47:51.175487    6731 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367806337s)
	I0819 10:47:51.175551    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:47:51.185809    6731 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:47:51.199305    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:51.209999    6731 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:47:51.305659    6731 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:47:51.404114    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:51.515116    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:47:51.528971    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:51.540018    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:51.642211    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
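
The block above switches the node from containerd/crio to Docker with cri-dockerd: stop the other runtimes, point crictl at /var/run/cri-dockerd.sock, drop a cgroupfs daemon.json (the 130-byte scp), and restart docker plus the cri-docker units. The log does not print the daemon.json payload, so the fields in this sketch are an assumption about its shape:

package main

import (
	"encoding/json"
	"fmt"
)

// dockerDaemonConfig is an assumed shape for the ~130-byte /etc/docker/daemon.json
// the log scp's over; the actual payload is not printed in this log.
type dockerDaemonConfig struct {
	ExecOpts  []string `json:"exec-opts"`
	LogDriver string   `json:"log-driver"`
}

func main() {
	cfg := dockerDaemonConfig{
		ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
		LogDriver: "json-file",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
}
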
	I0819 10:47:51.708864    6731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:47:51.708942    6731 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:47:51.713456    6731 start.go:563] Will wait 60s for crictl version
	I0819 10:47:51.713510    6731 ssh_runner.go:195] Run: which crictl
	I0819 10:47:51.719286    6731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:47:51.744566    6731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
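
"Will wait 60s for socket path" and "Will wait 60s for crictl version" above are bounded polls: stat the socket (or invoke crictl) until it succeeds or the deadline passes. A minimal Go sketch of such a poll loop; the 500ms retry interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls stat on a path until it exists or the timeout elapses,
// like the 60s wait for /var/run/cri-dockerd.sock in the log above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
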
	I0819 10:47:51.744636    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:51.762063    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:51.802673    6731 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:47:51.844258    6731 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:47:51.865266    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:51.865575    6731 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:47:51.869247    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
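
The bash one-liner above updates /etc/hosts without clobbering other entries: filter out any existing host.minikube.internal line, append the fresh mapping, write a temp file, then sudo cp it into place. The same filter-and-append rewrite in Go:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<host>" and appends the new
// mapping, mirroring the grep -v / echo pipeline in the log above.
func upsertHostsEntry(hosts, ip, host string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+host)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(in, "192.169.0.1", "host.minikube.internal"))
}
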
	I0819 10:47:51.879589    6731 mustload.go:65] Loading cluster: ha-431000
	I0819 10:47:51.879763    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:51.879994    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:51.880010    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:51.889072    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52064
	I0819 10:47:51.889483    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:51.889854    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:51.889872    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:51.890119    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:51.890230    6731 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:47:51.890313    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:51.890398    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:47:51.891393    6731 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:47:51.891646    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:51.891661    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:51.900428    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52066
	I0819 10:47:51.900763    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:51.901079    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:51.901089    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:51.901317    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:51.901415    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
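
The libmachine lines above show the driver plugin pattern: the docker-machine-driver-hyperkit binary is launched as a separate process, serves RPC on a localhost port (52064, 52066 here), and the client then calls methods such as .GetVersion and .GetState over that connection. A toy Go sketch of the pattern with net/rpc; the method and types are illustrative, not libmachine's real API:

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// DriverRPC is a toy stand-in for a libmachine driver plugin: the driver
// binary serves RPC on a localhost port and clients call methods like
// GetVersion over that connection.
type DriverRPC struct{}

func (d *DriverRPC) GetVersion(_ int, reply *int) error {
	*reply = 1 // matches "Using API Version  1" in the log
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(&DriverRPC{}); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, like 52064 above
	if err != nil {
		panic(err)
	}
	go srv.Accept(ln)

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	var version int
	if err := client.Call("DriverRPC.GetVersion", 0, &version); err != nil {
		panic(err)
	}
	fmt.Println("plugin API version:", version)
}
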
	I0819 10:47:51.901514    6731 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:47:51.901521    6731 certs.go:194] generating shared ca certs ...
	I0819 10:47:51.901534    6731 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:51.901670    6731 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:47:51.901723    6731 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:47:51.901732    6731 certs.go:256] generating profile certs ...
	I0819 10:47:51.901831    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:47:51.901922    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.f69e9b91
	I0819 10:47:51.901978    6731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:47:51.901986    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:47:51.902006    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:47:51.902026    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:47:51.902044    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:47:51.902062    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:47:51.902080    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:47:51.902099    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:47:51.902116    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:47:51.902197    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:47:51.902236    6731 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:47:51.902244    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:47:51.902283    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:47:51.902314    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:47:51.902343    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:47:51.902410    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:51.902441    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:47:51.902461    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:47:51.902483    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:51.902508    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:51.902593    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:51.902677    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:51.902761    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:51.902837    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:51.926599    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:47:51.930274    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:47:51.938012    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:47:51.941060    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:47:51.948752    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:47:51.951705    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:47:51.959653    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:47:51.962721    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:47:51.971351    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:47:51.974362    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:47:51.982204    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:47:51.985240    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:47:51.993894    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:47:52.013902    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:47:52.033528    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:47:52.053096    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:47:52.072504    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 10:47:52.091757    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:47:52.110982    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:47:52.130616    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:47:52.150337    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:47:52.170242    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:47:52.189881    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:47:52.209131    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:47:52.222937    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:47:52.236606    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:47:52.250135    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:47:52.263801    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:47:52.277449    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:47:52.290914    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
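
certs.go above builds a fixed list of source-to-target file assets (the NewFileAsset lines) and then copies each one into the VM. A hedged sketch of that asset list and copy loop; it copies locally for simplicity, where minikube transfers over SSH/scp, and the paths in main are placeholders:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// fileAsset pairs a local cert with its destination inside the VM.
type fileAsset struct {
	Source, Target string
}

// copyAsset copies one asset, creating the target directory first.
func copyAsset(a fileAsset) error {
	src, err := os.Open(a.Source)
	if err != nil {
		return err
	}
	defer src.Close()
	if err := os.MkdirAll(filepath.Dir(a.Target), 0o755); err != nil {
		return err
	}
	dst, err := os.Create(a.Target)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	assets := []fileAsset{
		{"/tmp/demo/ca.crt", "/tmp/demo/out/var/lib/minikube/certs/ca.crt"},
		{"/tmp/demo/ca.key", "/tmp/demo/out/var/lib/minikube/certs/ca.key"},
	}
	for _, a := range assets {
		if err := copyAsset(a); err != nil {
			fmt.Println("copy failed:", err)
		}
	}
}
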
	I0819 10:47:52.304537    6731 ssh_runner.go:195] Run: openssl version
	I0819 10:47:52.308871    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:47:52.317959    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:47:52.321340    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:47:52.321374    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:47:52.325500    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:47:52.334569    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:47:52.343508    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:52.346908    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:52.346954    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:52.351191    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 10:47:52.360097    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:47:52.369144    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:47:52.372634    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:47:52.372668    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:47:52.377048    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
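
Each CA under /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 symlink, where <hash> is the subject hash printed by openssl x509 -hash -noout (b5213941.0 for minikubeCA above). A sketch that shells out to openssl for the hash and creates the link:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert creates <sslCertsDir>/<subject-hash>.0 -> cert, computing the
// hash exactly as the log's "openssl x509 -hash -noout" run does.
func linkCACert(certPath, sslCertsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(sslCertsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already linked
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
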
	I0819 10:47:52.385997    6731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:47:52.389485    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 10:47:52.393773    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 10:47:52.398077    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 10:47:52.402284    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 10:47:52.406494    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 10:47:52.410784    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
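
openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds, which is how the six -checkend runs above validate each control-plane cert. The pure-Go equivalent parses the PEM and compares NotAfter:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside the
// window, mirroring "openssl x509 -checkend 86400" from the log above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 86400*time.Second)
	fmt.Println("expires within 24h:", expiring, "err:", err)
}
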
	I0819 10:47:52.415017    6731 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:47:52.415077    6731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:47:52.415094    6731 kube-vip.go:115] generating kube-vip config ...
	I0819 10:47:52.415128    6731 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:47:52.428484    6731 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:47:52.428533    6731 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
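
The static pod above runs kube-vip with leader election (the plndr-cp-lock lease, 5s duration / 3s renew deadline / 1s retry) so that exactly one control-plane node answers for the 192.169.0.254 VIP and load-balances port 8443; the manifest is then written to /etc/kubernetes/manifests a few lines below. Only a handful of fields vary per cluster, so generation is essentially template rendering, as in this trimmed sketch (the template text here is a cut-down stand-in, not minikube's real template):

package main

import (
	"os"
	"text/template"
)

// vipEnvTmpl parameterizes only the env entries that vary per cluster;
// the full manifest appears in the log above.
const vipEnvTmpl = `    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipEnvTmpl))
	_ = t.Execute(os.Stdout, struct {
		Interface, VIP string
		Port           int
	}{Interface: "eth0", VIP: "192.169.0.254", Port: 8443})
}
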
	I0819 10:47:52.428584    6731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:47:52.436426    6731 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:47:52.436471    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:47:52.443594    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:47:52.457212    6731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:47:52.470304    6731 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:47:52.484055    6731 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:47:52.486893    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:47:52.496372    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:52.591931    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:47:52.607116    6731 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:47:52.607291    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:52.628710    6731 out.go:177] * Verifying Kubernetes components...
	I0819 10:47:52.670346    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:52.783782    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:47:52.798292    6731 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:52.798497    6731 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1139f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:47:52.798536    6731 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:47:52.798707    6731 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:47:52.798781    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:47:52.798786    6731 round_trippers.go:469] Request Headers:
	I0819 10:47:52.798795    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:47:52.798799    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.294663    6731 round_trippers.go:574] Response Status: 200 OK in 8495 milliseconds
	I0819 10:48:01.295619    6731 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:48:01.295631    6731 node_ready.go:38] duration metric: took 8.496725269s for node "ha-431000-m02" to be "Ready" ...
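
node_ready.go above polls GET /api/v1/nodes/ha-431000-m02 until the node reports Ready (the first response took 8.5s while the kubelet came up). A hedged client-go sketch of the same loop; the kubeconfig path and 3s poll interval are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True, like the
// "waiting up to 6m0s for node ... to be Ready" loop in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("node %s not Ready after %v", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "ha-431000-m02", 6*time.Minute))
}
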
	I0819 10:48:01.295639    6731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0819 10:48:01.295675    6731 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:48:01.295684    6731 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:48:01.295719    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:01.295725    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.295731    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.295738    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.330440    6731 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0819 10:48:01.337354    6731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.337421    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:48:01.337427    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.337433    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.337437    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.341316    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.341771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.341778    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.341784    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.341787    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.348506    6731 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:48:01.348939    6731 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.348948    6731 pod_ready.go:82] duration metric: took 11.576417ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.348955    6731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.349002    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:48:01.349009    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.349018    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.349023    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.352838    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.353315    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.353323    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.353329    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.353332    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.359196    6731 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 10:48:01.359534    6731 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.359544    6731 pod_ready.go:82] duration metric: took 10.583164ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.359550    6731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.359593    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:48:01.359598    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.359606    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.359612    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.362788    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.363225    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.363232    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.363240    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.363244    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.367689    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:01.368075    6731 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.368086    6731 pod_ready.go:82] duration metric: took 8.530882ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.368092    6731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.368143    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:48:01.368148    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.368154    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.368159    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.371432    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.372034    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:01.372042    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.372047    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.372051    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.374444    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.374736    6731 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.374746    6731 pod_ready.go:82] duration metric: took 6.6473ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.374762    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.374802    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:48:01.374806    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.374812    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.374816    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.377666    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.497544    6731 request.go:632] Waited for 119.461544ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.497628    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.497639    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.497644    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.497657    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.500903    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.501455    6731 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.501465    6731 pod_ready.go:82] duration metric: took 126.694729ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
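
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter: once the burst is spent, each request blocks until the bucket refills at the configured QPS. A small demonstration with the flowcontrol package; the 5 QPS / burst 10 values are assumed here (client-go's historical defaults), not read from this log:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	start := time.Now()
	for i := 0; i < 15; i++ {
		limiter.Accept() // blocks once the burst of 10 is spent
	}
	fmt.Printf("15 requests took %v with 5 QPS / burst 10\n", time.Since(start))
}
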
	I0819 10:48:01.501472    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.696523    6731 request.go:632] Waited for 195.000548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:48:01.696576    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:48:01.696581    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.696587    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.696591    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.699558    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.896265    6731 request.go:632] Waited for 196.197674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:01.896299    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:01.896306    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.896314    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.896318    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.898585    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.899021    6731 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.899030    6731 pod_ready.go:82] duration metric: took 397.544864ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.899037    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.096355    6731 request.go:632] Waited for 197.256376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:48:02.096461    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:48:02.096473    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.096484    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.096492    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.100048    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:02.295872    6731 request.go:632] Waited for 195.092018ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:02.295923    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:02.295929    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.295935    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.295938    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.297901    6731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:48:02.298170    6731 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:02.298180    6731 pod_ready.go:82] duration metric: took 399.12914ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.298196    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.496479    6731 request.go:632] Waited for 198.200207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:48:02.496532    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:48:02.496579    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.496595    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.496601    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.500536    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:02.695959    6731 request.go:632] Waited for 194.694484ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:02.696038    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:02.696044    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.696053    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.696059    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.698693    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:02.699259    6731 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:02.699268    6731 pod_ready.go:82] duration metric: took 401.059351ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.699282    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2fn5w" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.895886    6731 request.go:632] Waited for 196.554773ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2fn5w
	I0819 10:48:02.895937    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2fn5w
	I0819 10:48:02.895943    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.895949    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.895952    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.898485    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:03.097015    6731 request.go:632] Waited for 197.927938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m04
	I0819 10:48:03.097110    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m04
	I0819 10:48:03.097121    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.097133    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.097139    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.100422    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:03.100848    6731 pod_ready.go:93] pod "kube-proxy-2fn5w" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:03.100861    6731 pod_ready.go:82] duration metric: took 401.564872ms for pod "kube-proxy-2fn5w" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.100870    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.297507    6731 request.go:632] Waited for 196.572896ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:48:03.297595    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:48:03.297605    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.297617    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.297628    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.300868    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:03.497170    6731 request.go:632] Waited for 195.491118ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:03.497222    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:03.497231    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.497243    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.497254    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.500591    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:03.501004    6731 pod_ready.go:98] node "ha-431000-m02" hosting pod "kube-proxy-5h7j2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:03.501017    6731 pod_ready.go:82] duration metric: took 400.132303ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	E0819 10:48:03.501025    6731 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-431000-m02" hosting pod "kube-proxy-5h7j2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
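
pod_ready.go:98 above shows the escape hatch: when the node hosting a pod is itself not Ready, the wait for that pod is skipped rather than blocking for the full six minutes. A hedged client-go sketch of that check, looking up the pod's spec.nodeName and inspecting the node's Ready condition (kubeconfig handling as in the earlier sketch):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostingNodeReady looks up the pod's node and reports whether its Ready
// condition is True; if not, the caller skips waiting on that pod.
func hostingNodeReady(cs *kubernetes.Clientset, namespace, pod string) (bool, error) {
	p, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	n, err := cs.CoreV1().Nodes().Get(context.TODO(), p.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %s has no Ready condition", n.Name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := hostingNodeReady(cs, "kube-system", "kube-proxy-5h7j2")
	fmt.Println("hosting node ready:", ready, "err:", err)
}
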
	I0819 10:48:03.501032    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.696124    6731 request.go:632] Waited for 195.010851ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:48:03.696172    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:48:03.696179    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.696218    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.696226    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.699032    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:03.895964    6731 request.go:632] Waited for 196.576431ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:03.896021    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:03.896029    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.896037    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.896043    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.898534    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:03.898926    6731 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:03.898935    6731 pod_ready.go:82] duration metric: took 397.887553ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.898942    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:04.096184    6731 request.go:632] Waited for 197.190491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:48:04.096246    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:48:04.096256    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.096269    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.096277    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.099213    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:04.297318    6731 request.go:632] Waited for 197.526248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:04.297394    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:04.297404    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.297415    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.297424    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.301350    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:04.301819    6731 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:04.301828    6731 pod_ready.go:82] duration metric: took 402.870121ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:04.301835    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:04.495992    6731 request.go:632] Waited for 194.108051ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:48:04.496068    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:48:04.496077    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.496087    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.496094    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.499407    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:04.696474    6731 request.go:632] Waited for 196.428196ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:04.696569    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:04.696581    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.696595    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.696602    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.699405    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:04.699912    6731 pod_ready.go:98] node "ha-431000-m02" hosting pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:04.699926    6731 pod_ready.go:82] duration metric: took 398.076795ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	E0819 10:48:04.699934    6731 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-431000-m02" hosting pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:04.699945    6731 pod_ready.go:39] duration metric: took 3.404223088s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:48:04.699963    6731 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:48:04.700028    6731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:48:04.711937    6731 api_server.go:72] duration metric: took 12.104535169s to wait for apiserver process to appear ...
	I0819 10:48:04.711948    6731 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:48:04.711964    6731 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:48:04.714976    6731 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
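
api_server.go above first waits for the kube-apiserver process via pgrep, then polls https://192.169.0.5:8443/healthz until it returns 200 with body "ok". A minimal sketch of the healthz probe; skipping TLS verification is a shortcut for this sketch only, where minikube verifies against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs the apiserver /healthz endpoint, like api_server.go:253.
func checkHealthz(url string) (string, int, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", 0, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body), resp.StatusCode, nil
}

func main() {
	body, code, err := checkHealthz("https://192.169.0.5:8443/healthz")
	fmt.Println(code, body, err) // expects 200 "ok" once the apiserver is up
}
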
	I0819 10:48:04.715016    6731 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:48:04.715022    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.715028    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.715032    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.715515    6731 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:48:04.715659    6731 api_server.go:141] control plane version: v1.31.0
	I0819 10:48:04.715671    6731 api_server.go:131] duration metric: took 3.718718ms to wait for apiserver health ...
	I0819 10:48:04.715676    6731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:48:04.896062    6731 request.go:632] Waited for 180.330037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:04.896138    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:04.896149    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.896159    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.896167    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.900885    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:04.904876    6731 system_pods.go:59] 19 kube-system pods found
	I0819 10:48:04.904891    6731 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:48:04.904896    6731 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:48:04.904899    6731 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:48:04.904902    6731 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:48:04.904905    6731 system_pods.go:61] "kindnet-kcrzx" [4d8e74ea-456c-476b-951f-c880eb642788] Running
	I0819 10:48:04.904908    6731 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:48:04.904911    6731 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:48:04.904914    6731 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:48:04.904916    6731 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:48:04.904919    6731 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:48:04.904922    6731 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:48:04.904925    6731 system_pods.go:61] "kube-proxy-2fn5w" [bca1b722-fe85-4f4b-a536-8228357812a4] Running
	I0819 10:48:04.904927    6731 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:48:04.904930    6731 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:48:04.904933    6731 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:48:04.904935    6731 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:48:04.904938    6731 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:48:04.904940    6731 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:48:04.904955    6731 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:48:04.904964    6731 system_pods.go:74] duration metric: took 189.278663ms to wait for pod list to return data ...
	I0819 10:48:04.904971    6731 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:48:05.096767    6731 request.go:632] Waited for 191.735215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:48:05.096807    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:48:05.096813    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:05.096824    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:05.096848    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:05.099644    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:05.099783    6731 default_sa.go:45] found service account: "default"
	I0819 10:48:05.099793    6731 default_sa.go:55] duration metric: took 194.813501ms for default service account to be created ...
	I0819 10:48:05.099798    6731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:48:05.296235    6731 request.go:632] Waited for 196.389305ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:05.296338    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:05.296351    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:05.296362    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:05.296370    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:05.300491    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:05.304610    6731 system_pods.go:86] 19 kube-system pods found
	I0819 10:48:05.304622    6731 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:48:05.304626    6731 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:48:05.304629    6731 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:48:05.304631    6731 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:48:05.304634    6731 system_pods.go:89] "kindnet-kcrzx" [4d8e74ea-456c-476b-951f-c880eb642788] Running
	I0819 10:48:05.304636    6731 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:48:05.304639    6731 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:48:05.304641    6731 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:48:05.304644    6731 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:48:05.304646    6731 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:48:05.304652    6731 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:48:05.304655    6731 system_pods.go:89] "kube-proxy-2fn5w" [bca1b722-fe85-4f4b-a536-8228357812a4] Running
	I0819 10:48:05.304658    6731 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:48:05.304660    6731 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:48:05.304663    6731 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:48:05.304666    6731 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:48:05.304670    6731 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:48:05.304673    6731 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:48:05.304675    6731 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:48:05.304679    6731 system_pods.go:126] duration metric: took 204.873114ms to wait for k8s-apps to be running ...
	I0819 10:48:05.304689    6731 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:48:05.304743    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:48:05.315748    6731 system_svc.go:56] duration metric: took 11.056169ms WaitForService to wait for kubelet
	I0819 10:48:05.315761    6731 kubeadm.go:582] duration metric: took 12.708349079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:48:05.315777    6731 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:48:05.496283    6731 request.go:632] Waited for 180.435074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:48:05.496409    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:48:05.496422    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:05.496434    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:05.496442    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:05.500479    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:05.501183    6731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:48:05.501199    6731 node_conditions.go:123] node cpu capacity is 2
	I0819 10:48:05.501209    6731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:48:05.501213    6731 node_conditions.go:123] node cpu capacity is 2
	I0819 10:48:05.501217    6731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:48:05.501220    6731 node_conditions.go:123] node cpu capacity is 2
	I0819 10:48:05.501224    6731 node_conditions.go:105] duration metric: took 185.438997ms to run NodePressure ...
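
The NodePressure step above lists the nodes and reads each one's ephemeral-storage and CPU capacity (three nodes, hence the three repeated pairs of lines). A small sketch of reading those same two fields with client-go, assuming a clientset built as in the previous sketch plus the extra import corev1 "k8s.io/api/core/v1":

// nodeCapacity prints, for every node, the two capacity fields the
// log inspects above: ephemeral storage and CPU.
func nodeCapacity(ctx context.Context, clientset *kubernetes.Clientset) error {
	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
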
	I0819 10:48:05.501232    6731 start.go:241] waiting for startup goroutines ...
	I0819 10:48:05.501250    6731 start.go:255] writing updated cluster config ...
	I0819 10:48:05.523466    6731 out.go:201] 
	I0819 10:48:05.560623    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:05.560698    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:48:05.598433    6731 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:48:05.673302    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:48:05.673330    6731 cache.go:56] Caching tarball of preloaded images
	I0819 10:48:05.673481    6731 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:48:05.673495    6731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:48:05.673583    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:48:05.674126    6731 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:48:05.674196    6731 start.go:364] duration metric: took 53.173µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:48:05.674214    6731 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:48:05.674220    6731 fix.go:54] fixHost starting: m03
	I0819 10:48:05.674532    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:48:05.674564    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:48:05.684031    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52071
	I0819 10:48:05.684387    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:48:05.684730    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:48:05.684748    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:48:05.684970    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:48:05.685096    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:05.685184    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:48:05.685314    6731 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:05.685417    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:48:05.686356    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid 4921 missing from process table
	I0819 10:48:05.686393    6731 fix.go:112] recreateIfNeeded on ha-431000-m03: state=Stopped err=<nil>
	I0819 10:48:05.686403    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	W0819 10:48:05.686488    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:48:05.707556    6731 out.go:177] * Restarting existing hyperkit VM for "ha-431000-m03" ...
	I0819 10:48:05.749205    6731 main.go:141] libmachine: (ha-431000-m03) Calling .Start
	I0819 10:48:05.749457    6731 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:05.749508    6731 main.go:141] libmachine: (ha-431000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:48:05.750891    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid 4921 missing from process table
	I0819 10:48:05.750907    6731 main.go:141] libmachine: (ha-431000-m03) DBG | pid 4921 is in state "Stopped"
	I0819 10:48:05.750937    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid...
	I0819 10:48:05.751980    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:48:05.783917    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:48:05.783944    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:48:05.784089    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00039adb0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:48:05.784126    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00039adb0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:48:05.784162    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:48:05.784200    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:48:05.784218    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:48:05.786149    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Pid is 6801
	I0819 10:48:05.786682    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:48:05.786725    6731 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:05.786782    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 6801
	I0819 10:48:05.789082    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:48:05.789187    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:48:05.789247    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d6bf}
	I0819 10:48:05.789282    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d6ab}
	I0819 10:48:05.789327    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:48:05.789331    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 10:48:05.789394    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:48:05.789432    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:48:05.789457    6731 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
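
To recover the restarted VM's address, the driver searches /var/db/dhcpd_leases for the MAC it generated (f6:29:ff:43:e4:63) and takes the matching lease's IP. The entries printed above are the driver's parsed rendering of each lease; below is a sketch that matches that rendered form (a hypothetical simplification: the raw dhcpd_leases syntax on macOS differs, and this parses only the log rendering). It needs import regexp.

// findIPByMAC returns the IP for a MAC from entries rendered like the
// "dhcp entry: {Name:... IPAddress:... HWAddress:...}" lines above.
// Illustrative only; it parses the log rendering, not the raw lease file.
func findIPByMAC(entries []string, mac string) (string, bool) {
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+)`)
	for _, e := range entries {
		if m := re.FindStringSubmatch(e); m != nil && m[2] == mac {
			return m[1], true
		}
	}
	return "", false
}
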
	I0819 10:48:05.790573    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:05.790831    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:48:05.791509    6731 machine.go:93] provisionDockerMachine start ...
	I0819 10:48:05.791526    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:05.791708    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:05.791856    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:05.791989    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:05.792106    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:05.792233    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:05.792391    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:05.792718    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:05.792736    6731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:48:05.795522    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:48:05.805645    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:48:05.807213    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:48:05.807239    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:48:05.807263    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:48:05.807280    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:48:06.196775    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:48:06.196792    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:48:06.311674    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:48:06.311699    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:48:06.311708    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:48:06.311716    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:48:06.312485    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:48:06.312497    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:48:11.891105    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:48:11.891118    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:48:11.891126    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:48:11.914412    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:48:40.850746    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:48:40.850774    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:48:40.850923    6731 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:48:40.850935    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:48:40.851109    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:40.851215    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:40.851319    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.851447    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.851565    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:40.851724    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:40.851884    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:40.851892    6731 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:48:40.912350    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:48:40.912364    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:40.912505    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:40.912602    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.912691    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.912785    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:40.912908    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:40.913053    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:40.913064    6731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:48:40.968529    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
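
Every provisioning step here runs over SSH as the docker user with the machine's id_rsa key (see the sshutil.go lines). A minimal, self-contained sketch of running one such remote command with golang.org/x/crypto/ssh, reusing the address and key path shown in the log:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable only for a throwaway test VM; real code should pin host keys.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
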
	I0819 10:48:40.968544    6731 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:48:40.968564    6731 buildroot.go:174] setting up certificates
	I0819 10:48:40.968572    6731 provision.go:84] configureAuth start
	I0819 10:48:40.968583    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:48:40.968727    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:40.968824    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:40.968927    6731 provision.go:143] copyHostCerts
	I0819 10:48:40.968955    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:48:40.969005    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:48:40.969014    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:48:40.969148    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:48:40.969352    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:48:40.969382    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:48:40.969386    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:48:40.969454    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:48:40.969597    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:48:40.969626    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:48:40.969631    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:48:40.969728    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:48:40.969875    6731 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
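
The server cert above is a leaf signed by the minikube CA, carrying the IP and DNS SANs listed in the san=[...] field. A sketch of that signing step with crypto/x509; it generates a throwaway CA inline, whereas the real flow loads ca.pem and ca-key.pem from the certs directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Leaf with the SANs from the log line above.
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:     []string{"ha-431000-m03", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
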
	I0819 10:48:41.057829    6731 provision.go:177] copyRemoteCerts
	I0819 10:48:41.057874    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:48:41.057888    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.058026    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.058130    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.058224    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.058305    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:41.091148    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:48:41.091220    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:48:41.111177    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:48:41.111249    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:48:41.131169    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:48:41.131232    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:48:41.150507    6731 provision.go:87] duration metric: took 181.923979ms to configureAuth
	I0819 10:48:41.150522    6731 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:48:41.150698    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:41.150712    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:41.150863    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.150946    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.151038    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.151126    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.151222    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.151342    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:41.151471    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:41.151478    6731 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:48:41.202400    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:48:41.202413    6731 buildroot.go:70] root file system type: tmpfs
	I0819 10:48:41.202505    6731 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:48:41.202518    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.202705    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.202819    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.202905    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.202997    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.203153    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:41.203294    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:41.203341    6731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:48:41.264039    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:48:41.264057    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.264193    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.264267    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.264354    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.264447    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.264565    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:41.264712    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:41.264724    6731 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:48:42.813749    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:48:42.813763    6731 machine.go:96] duration metric: took 37.021449642s to provisionDockerMachine
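
The `sudo diff -u ... || { mv ...; systemctl ...; }` command above makes the unit install idempotent: docker is only reconfigured and restarted when the rendered docker.service actually differs (here diff fails because the file does not exist yet, so the new unit is installed and the symlink created). A local, non-SSH Go analogue of that compare-then-swap, with illustrative paths and service name (imports: bytes, fmt, os, os/exec):

// applyUnit writes newUnit to path and cycles the service only when the
// content changed, mirroring the diff-or-replace shell pipeline above.
func applyUnit(path string, newUnit []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newUnit) {
		return nil // unchanged, like `diff -u` exiting 0: nothing to restart
	}
	if err := os.WriteFile(path, newUnit, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}
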
	I0819 10:48:42.813771    6731 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:48:42.813778    6731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:48:42.813796    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:42.813978    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:48:42.813990    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:42.814079    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:42.814168    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.814251    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:42.814339    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:42.847285    6731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:48:42.850702    6731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:48:42.850716    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:48:42.850802    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:48:42.850961    6731 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:48:42.850968    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:48:42.851143    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:48:42.859533    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:48:42.879757    6731 start.go:296] duration metric: took 65.975651ms for postStartSetup
	I0819 10:48:42.879780    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:42.879958    6731 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:48:42.879970    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:42.880059    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:42.880147    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.880225    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:42.880299    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:42.912892    6731 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:48:42.912952    6731 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:48:42.966028    6731 fix.go:56] duration metric: took 37.291003007s for fixHost
	I0819 10:48:42.966067    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:42.966300    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:42.966470    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.966677    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.966842    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:42.967014    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:42.967198    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:42.967209    6731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:48:43.017214    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089722.809914885
	
	I0819 10:48:43.017227    6731 fix.go:216] guest clock: 1724089722.809914885
	I0819 10:48:43.017238    6731 fix.go:229] Guest: 2024-08-19 10:48:42.809914885 -0700 PDT Remote: 2024-08-19 10:48:42.966051 -0700 PDT m=+90.012694037 (delta=-156.136115ms)
	I0819 10:48:43.017249    6731 fix.go:200] guest clock delta is within tolerance: -156.136115ms
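
The fixHost clock check reads the guest's time with `date +%s.%N` over SSH and compares it to the host clock; the -156ms delta above is within tolerance, so no resync is needed. A small sketch of parsing that output and computing the delta (imports: strconv, strings, time):

// clockDelta parses `date +%s.%N` output such as "1724089722.809914885"
// and returns guest time minus the local clock.
func clockDelta(dateOutput string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad to nine digits so ".8" parses as 800ms, not 8ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(time.Now()), nil
}
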
	I0819 10:48:43.017253    6731 start.go:83] releasing machines lock for "ha-431000-m03", held for 37.342247723s
	I0819 10:48:43.017267    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.017412    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:43.053981    6731 out.go:177] * Found network options:
	I0819 10:48:43.129066    6731 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:48:43.183072    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:48:43.183105    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:48:43.183124    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.183855    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.184015    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.184100    6731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:48:43.184137    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:48:43.184239    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:48:43.184256    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:48:43.184293    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:43.184321    6731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:48:43.184333    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:43.184497    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:43.184513    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:43.184663    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:43.184689    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:43.184810    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:43.184822    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:43.184959    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:48:43.213583    6731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:48:43.213642    6731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:48:43.260969    6731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:48:43.260991    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:48:43.261093    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:48:43.276683    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:48:43.284995    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:48:43.293374    6731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:48:43.293418    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:48:43.301652    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:48:43.309897    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:48:43.318705    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:48:43.326972    6731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:48:43.335390    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:48:43.343887    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:48:43.352357    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:48:43.360984    6731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:48:43.368494    6731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:48:43.376120    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:43.467265    6731 ssh_runner.go:195] Run: sudo systemctl restart containerd
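
The sed sequence above points containerd at the cgroupfs driver by rewriting /etc/containerd/config.toml in place (SystemdCgroup = false, the runc v2 runtime, the CNI conf_dir, and so on) before the daemon-reload and restart. The SystemdCgroup edit as a Go equivalent (imports: os, regexp):

// setCgroupfs performs the same in-place edit as
// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`.
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}
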
	I0819 10:48:43.484775    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:48:43.484846    6731 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:48:43.497091    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:48:43.508193    6731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:48:43.523755    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:48:43.534687    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:48:43.544926    6731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:48:43.565401    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:48:43.578088    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:48:43.593104    6731 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:48:43.595950    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:48:43.603348    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:48:43.617225    6731 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:48:43.708564    6731 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:48:43.826974    6731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:48:43.827000    6731 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:48:43.840921    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:43.931831    6731 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:48:46.156257    6731 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.224358944s)
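
The docker.go step above pushes a 130-byte /etc/docker/daemon.json selecting the cgroupfs cgroup driver, then restarts docker. The log does not show the payload; the sketch below is a plausible reconstruction using docker's documented exec-opts key, an assumption rather than the literal bytes minikube ships (imports: encoding/json, os):

// writeDaemonJSON writes a minimal daemon.json choosing the cgroupfs
// driver. ASSUMPTION: the exact content is not shown in the log;
// exec-opts/native.cgroupdriver is docker's documented knob for this.
func writeDaemonJSON(path string) error {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600)
}
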
	I0819 10:48:46.156321    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:48:46.167537    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:48:46.177508    6731 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:48:46.275371    6731 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:48:46.384348    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:46.481007    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:48:46.494577    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:48:46.505747    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:46.597531    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:48:46.653351    6731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:48:46.653427    6731 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:48:46.657670    6731 start.go:563] Will wait 60s for crictl version
	I0819 10:48:46.657717    6731 ssh_runner.go:195] Run: which crictl
	I0819 10:48:46.660938    6731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:48:46.686761    6731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:48:46.686832    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:48:46.704526    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:48:46.743134    6731 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:48:46.784818    6731 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:48:46.805951    6731 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0819 10:48:46.827168    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:46.827576    6731 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:48:46.832299    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
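
The bash pipeline above rewrites /etc/hosts so exactly one "192.169.0.1	host.minikube.internal" line remains: grep -v drops any stale entry, echo appends the fresh mapping, and the temp file is copied back with sudo. The same filter-and-append as a Go sketch (imports: os, strings):

// ensureHostsEntry drops lines ending in "\t"+host and appends
// "ip\thost", mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}
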
	I0819 10:48:46.842314    6731 mustload.go:65] Loading cluster: ha-431000
	I0819 10:48:46.842487    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:46.842703    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:48:46.842725    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:48:46.851523    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52093
	I0819 10:48:46.851853    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:48:46.852189    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:48:46.852199    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:48:46.852392    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:48:46.852498    6731 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:48:46.852572    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:46.852653    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:48:46.853627    6731 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:48:46.853864    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:48:46.853886    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:48:46.862538    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52095
	I0819 10:48:46.862891    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:48:46.863218    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:48:46.863228    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:48:46.863493    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:48:46.863609    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:48:46.863718    6731 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.7
	I0819 10:48:46.863725    6731 certs.go:194] generating shared ca certs ...
	I0819 10:48:46.863739    6731 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:46.863891    6731 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:48:46.863952    6731 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:48:46.863961    6731 certs.go:256] generating profile certs ...
	I0819 10:48:46.864059    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:48:46.864084    6731 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc
	I0819 10:48:46.864099    6731 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0819 10:48:47.115702    6731 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc ...
	I0819 10:48:47.115728    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc: {Name:mk546bf47d8f9536a5f5b6d4554be985cbd51530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:47.116053    6731 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc ...
	I0819 10:48:47.116065    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc: {Name:mk7e6a2c85fe835844cf7f3435ab2787264953bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:47.116272    6731 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:48:47.116477    6731 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:48:47.116689    6731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
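The profile cert generated above is the apiserver serving certificate: it is signed by the shared minikubeCA and must carry every address a client might dial as an IP SAN, which is why the list includes the in-cluster service IP (10.96.0.1), localhost, the three control-plane node IPs, and the kube-vip VIP (192.169.0.254). A minimal Go sketch of issuing such a SAN-bearing cert with crypto/x509 (the in-process CA and all names here are illustrative, not minikube's actual crypto.go):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative CA generated in-process; minikube instead loads ca.key/ca.crt from disk.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert whose IP SANs mirror the list logged above.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"),
                net.ParseIP("192.169.0.7"), net.ParseIP("192.169.0.254"),
            },
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }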
	I0819 10:48:47.116699    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:48:47.116720    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:48:47.116739    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:48:47.116757    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:48:47.116776    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:48:47.116795    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:48:47.116812    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:48:47.116829    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:48:47.116905    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:48:47.116938    6731 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:48:47.116947    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:48:47.116979    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:48:47.117007    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:48:47.117035    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:48:47.117102    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:48:47.117135    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.117157    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.117176    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.117208    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:48:47.117346    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:48:47.117436    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:48:47.117536    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:48:47.117615    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
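The ssh client built here is what every following ssh_runner call rides on: key-based auth as the "docker" user against the node's IP. A rough equivalent using golang.org/x/crypto/ssh (key path and address are the ones from the log; host-key checking is skipped only because these are throwaway test VMs):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for ephemeral test VMs only
        }
        client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // One session per command, like the ssh_runner Run calls below.
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("stat -c %s /var/lib/minikube/certs/sa.pub")
        if err != nil {
            panic(err)
        }
        fmt.Printf("sa.pub size: %s", out)
    }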
	I0819 10:48:47.142966    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:48:47.147073    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:48:47.155318    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:48:47.158461    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:48:47.166659    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:48:47.169909    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:48:47.178109    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:48:47.181265    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:48:47.189483    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:48:47.192613    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:48:47.201555    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:48:47.205119    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:48:47.213152    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:48:47.233357    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:48:47.253373    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:48:47.273621    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:48:47.293620    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 10:48:47.313508    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:48:47.333626    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:48:47.353462    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:48:47.373370    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:48:47.393215    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:48:47.412732    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:48:47.432601    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:48:47.446319    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:48:47.460225    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:48:47.473780    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:48:47.487357    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:48:47.501097    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:48:47.514700    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:48:47.528522    6731 ssh_runner.go:195] Run: openssl version
	I0819 10:48:47.532949    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:48:47.541688    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.545076    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.545117    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.549433    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 10:48:47.558033    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:48:47.566686    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.570522    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.570574    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.574909    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:48:47.583535    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:48:47.592184    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.595867    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.595904    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.600346    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
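The openssl/ln sequence above is the manual version of c_rehash: each CA PEM is placed under /usr/share/ca-certificates, and a symlink named <subject-hash>.0 (e.g. b5213941.0) is planted in /etc/ssl/certs so OpenSSL's hash-based lookup can find it. A small Go sketch of the same dance, shelling out to openssl exactly as the log does (paths match the log; run as root on the guest):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl x509 -hash -noout prints the subject hash used for lookup, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // Equivalent of: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // -f: replace a stale link if present
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            panic(err)
        }
    }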
	I0819 10:48:47.609333    6731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:48:47.612588    6731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:48:47.612626    6731 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0819 10:48:47.612672    6731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
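The kubelet unit above is rendered per node: the only values that vary are the Kubernetes version in the binary path, the --hostname-override, and the --node-ip. A sketch of that rendering (the unit text is copied from the log; the helper function itself is an illustration, not minikube's kubeadm.go, which uses templates):

    package main

    import (
        "fmt"
        "os"
    )

    // kubeletUnit renders the per-node systemd drop-in shown above.
    func kubeletUnit(version, nodeName, nodeIP string) string {
        return "[Unit]\nWants=docker.socket\n\n[Service]\nExecStart=\n" +
            fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
                "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
                "--config=/var/lib/kubelet/config.yaml --hostname-override=%s "+
                "--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s\n",
                version, nodeName, nodeIP) +
            "\n[Install]\n"
    }

    func main() {
        // Values from this run: third control-plane node of ha-431000.
        os.Stdout.WriteString(kubeletUnit("v1.31.0", "ha-431000-m03", "192.169.0.7"))
    }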
	I0819 10:48:47.612693    6731 kube-vip.go:115] generating kube-vip config ...
	I0819 10:48:47.612723    6731 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:48:47.627870    6731 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:48:47.627924    6731 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
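kube-vip runs as a static pod on each control-plane node; with cp_enable and lb_enable set, it both holds the 192.169.0.254 VIP via ARP and leader election (lease plndr-cp-lock, 5s duration) and load-balances port 8443 across the API servers. A quick sketch, not part of minikube, of sanity-checking the rendered manifest with client-go's types (assumes k8s.io/api and sigs.k8s.io/yaml are on the module path):

    package main

    import (
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(raw, &pod); err != nil {
            panic(err)
        }
        for _, env := range pod.Spec.Containers[0].Env {
            switch env.Name {
            case "address", "lb_enable", "vip_leasename":
                // Expect 192.169.0.254, true, plndr-cp-lock for this run.
                fmt.Printf("%s=%s\n", env.Name, env.Value)
            }
        }
    }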
	I0819 10:48:47.627976    6731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:48:47.636973    6731 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:48:47.637024    6731 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:48:47.646020    6731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 10:48:47.646020    6731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 10:48:47.646020    6731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 10:48:47.646038    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:48:47.646059    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:48:47.646062    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:48:47.646121    6731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:48:47.646172    6731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:48:47.660116    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:48:47.660157    6731 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:48:47.660183    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:48:47.660208    6731 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:48:47.660226    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:48:47.660248    6731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:48:47.673769    6731 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:48:47.673805    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
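The binaries are fetched with go-getter-style checksum=file:...sha256 URLs, meaning each download is verified against the published SHA-256 before landing in /var/lib/minikube/binaries. A minimal sketch of that download-and-verify step with plain net/http and crypto/sha256 (the real transfer goes through minikube's download package; the output path here is illustrative):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func main() {
        base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"

        // Fetch the published digest first (the file contains the hex digest).
        sumResp, err := http.Get(base + ".sha256")
        if err != nil {
            panic(err)
        }
        defer sumResp.Body.Close()
        sumBytes, err := io.ReadAll(sumResp.Body)
        if err != nil {
            panic(err)
        }
        want := strings.Fields(string(sumBytes))[0]

        // Stream the binary to disk while hashing it in one pass.
        binResp, err := http.Get(base)
        if err != nil {
            panic(err)
        }
        defer binResp.Body.Close()
        f, err := os.Create("kubectl")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), binResp.Body); err != nil {
            panic(err)
        }

        got := hex.EncodeToString(h.Sum(nil))
        if got != want {
            panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
        }
        fmt.Println("kubectl verified:", got)
    }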
	I0819 10:48:48.141691    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:48:48.149459    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:48:48.162963    6731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:48:48.176379    6731 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:48:48.189896    6731 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:48:48.192847    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
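The one-liner above makes the hosts entry idempotent: strip any previous control-plane.minikube.internal line, append the current VIP mapping, and copy the result back over /etc/hosts. The same logic in Go, for clarity (needs root, like the sudo cp in the log):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        raw, err := os.ReadFile(hostsPath)
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
            // Same filter as the grep -v: drop any prior mapping for the name.
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.169.0.254\tcontrol-plane.minikube.internal")
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }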
	I0819 10:48:48.202768    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:48.297576    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:48:48.315324    6731 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:48:48.315508    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:48.336018    6731 out.go:177] * Verifying Kubernetes components...
	I0819 10:48:48.356514    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:48.452232    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:48:49.049566    6731 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:48:49.049773    6731 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1139f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:48:49.049811    6731 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
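The client config above is built from the kubeconfig's client cert/key, and the stale VIP host (unreachable while kube-vip is still settling) is swapped for a live control-plane endpoint. Reconstructed with client-go (clientcmd for the config, a plain clientset for the GETs that follow; sketch only, not kapi.go):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19478-1622/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Mirrors the "Overriding stale ClientConfig host" line above.
        cfg.Host = "https://192.169.0.5:8443"

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-431000-m03", metav1.GetOptions{})
        if err != nil {
            fmt.Println("get node:", err) // 404 until the node registers
            return
        }
        fmt.Println("node:", node.Name)
    }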
	I0819 10:48:49.049986    6731 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m03" to be "Ready" ...
	I0819 10:48:49.050026    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:49.050031    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:49.050044    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:49.050049    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:49.052182    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:49.550380    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:49.550401    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:49.550412    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:49.550420    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:49.553469    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:50.050836    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:50.050856    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:50.050867    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:50.050872    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:50.053828    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:50.551275    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:50.551290    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:50.551297    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:50.551299    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:50.553247    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:48:51.051126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:51.051149    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:51.051161    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:51.051169    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:51.054487    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:51.054565    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
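Everything from here to the end of the section is that wait loop playing out: a GET on the node object roughly every 500ms, with a progress line whenever NotFound persists; the object only appears once kubeadm join completes and the kubelet registers. The same wait expressed with apimachinery's wait helpers instead of a hand-rolled loop (sketch only; pass in the clientset from the previous snippet, e.g. err := waitNodeReady(cs, "ha-431000-m03")):

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> every 500ms for up to 6m,
    // treating NotFound as "keep waiting" -- the cadence seen in the log.
    func waitNodeReady(cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil // not registered yet: the repeated 404s below
                }
                if err != nil {
                    return false, err
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }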
	I0819 10:48:51.550751    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:51.550764    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:51.550770    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:51.550773    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:51.554094    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:52.051808    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:52.051848    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:52.051857    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:52.051864    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:52.054405    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:52.551111    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:52.551135    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:52.551147    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:52.551153    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:52.554177    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:53.050562    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:53.050577    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:53.050584    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:53.050587    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:53.052361    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:48:53.550771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:53.550787    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:53.550794    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:53.550798    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:53.553283    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:53.553380    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:54.051356    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:54.051428    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:54.051441    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:54.051447    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:54.054348    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:54.551004    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:54.551020    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:54.551026    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:54.551030    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:54.553045    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:55.051095    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:55.051142    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:55.051152    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:55.051157    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:55.053428    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:55.550441    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:55.550460    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:55.550470    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:55.550475    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:55.553606    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:55.553707    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:56.050952    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:56.050966    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:56.050973    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:56.050976    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:56.052832    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:48:56.551392    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:56.551413    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:56.551441    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:56.551446    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:56.553734    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:57.051356    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:57.051377    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:57.051388    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:57.051396    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:57.054556    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:57.551010    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:57.551030    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:57.551041    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:57.551047    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:57.553839    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:57.553945    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:58.050877    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:58.050892    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:58.050900    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:58.050903    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:58.053207    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:58.551669    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:58.551688    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:58.551699    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:58.551707    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:58.554730    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:59.050796    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:59.050819    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:59.050830    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:59.050835    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:59.054088    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:59.550718    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:59.550737    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:59.550749    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:59.550756    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:59.553970    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:59.554048    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:00.052097    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:00.052120    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:00.052167    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:00.052198    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:00.055063    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:00.550744    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:00.550766    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:00.550776    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:00.550782    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:00.553834    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:01.051854    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:01.051873    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:01.051885    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:01.051892    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:01.055031    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:01.551302    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:01.551323    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:01.551335    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:01.551343    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:01.554596    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:01.554668    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:02.050920    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:02.050940    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:02.050958    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:02.050975    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:02.053736    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:02.552196    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:02.552230    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:02.552237    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:02.552240    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:02.554641    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:03.050838    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:03.050857    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:03.050868    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:03.050873    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:03.054125    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:03.550771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:03.550785    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:03.550794    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:03.550798    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:03.552910    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:04.052575    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:04.052595    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:04.052607    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:04.052621    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:04.055636    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:04.055705    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:04.552223    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:04.552242    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:04.552253    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:04.552259    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:04.555524    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:05.052550    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:05.052574    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:05.052588    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:05.052610    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:05.054909    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:05.552550    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:05.552568    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:05.552577    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:05.552581    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:05.556192    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:06.051290    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:06.051305    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:06.051311    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:06.051315    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:06.052929    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:06.550946    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:06.550969    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:06.550981    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:06.550989    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:06.565463    6731 round_trippers.go:574] Response Status: 404 Not Found in 14 milliseconds
	I0819 10:49:06.565539    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:07.051724    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:07.051792    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:07.051806    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:07.051822    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:07.054638    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:07.552559    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:07.552575    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:07.552583    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:07.552587    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:07.558906    6731 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0819 10:49:08.051983    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:08.052011    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:08.052048    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:08.052057    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:08.055151    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:08.550667    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:08.550693    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:08.550735    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:08.550750    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:08.553804    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:09.052706    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:09.052731    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:09.052776    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:09.052784    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:09.055712    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:09.055781    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:09.551599    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:09.551615    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:09.551624    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:09.551630    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:09.555183    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:10.050631    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:10.050657    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:10.050669    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:10.050674    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:10.054985    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:49:10.551126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:10.551137    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:10.551143    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:10.551146    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:10.553249    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:11.052626    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:11.052644    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:11.052651    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:11.052656    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:11.055384    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:11.550711    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:11.550725    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:11.550729    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:11.550733    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:11.554398    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:11.554509    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:12.051859    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:12.051884    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:12.051924    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:12.051934    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:12.055082    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:12.551161    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:12.551173    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:12.551179    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:12.551183    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:12.553279    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:13.051549    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:13.051610    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:13.051621    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:13.051628    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:13.054867    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:13.551864    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:13.551878    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:13.551884    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:13.551889    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:13.555066    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:13.555140    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:14.052199    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:14.052217    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:14.052223    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:14.052226    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:14.054562    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:14.551764    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:14.551790    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:14.551801    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:14.551807    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:14.555310    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:15.052223    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:15.052279    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:15.052293    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:15.052299    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:15.055796    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:15.550718    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:15.550733    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:15.550759    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:15.550766    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:15.554217    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:16.052643    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:16.052670    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:16.052716    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:16.052724    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:16.056008    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:16.056083    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:16.551933    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:16.551956    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:16.551968    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:16.551974    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:16.555280    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:17.051987    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:17.052008    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:17.052018    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:17.052025    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:17.055318    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:17.551734    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:17.551746    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:17.551751    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:17.551754    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:17.553654    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:18.050867    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:18.050886    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:18.050899    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:18.050904    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:18.053425    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:18.551523    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:18.551543    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:18.551551    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:18.551557    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:18.554279    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:18.554345    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:19.051204    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:19.051234    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:19.051246    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:19.051252    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:19.054668    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:19.552430    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:19.552449    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:19.552455    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:19.552460    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:19.554479    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:20.050892    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:20.050918    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:20.050930    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:20.050943    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:20.054172    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:20.552143    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:20.552182    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:20.552192    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:20.552198    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:20.554611    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:20.554681    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:21.051321    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:21.051347    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:21.051390    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:21.051401    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:21.054431    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:21.552828    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:21.552891    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:21.552901    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:21.552906    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:21.555366    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:22.051105    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:22.051128    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:22.051140    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:22.051146    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:22.054457    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:22.551053    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:22.551070    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:22.551078    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:22.551081    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:22.553091    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:23.051049    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:23.051073    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:23.051085    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:23.051092    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:23.054116    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:23.054269    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:23.551400    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:23.551419    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:23.551427    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:23.551429    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:23.556948    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:49:24.051531    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:24.051549    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:24.051561    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:24.051569    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:24.054942    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:24.551524    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:24.551548    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:24.551559    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:24.551565    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:24.554301    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:25.050993    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:25.051013    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:25.051022    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:25.051026    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:25.053462    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:25.551254    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:25.551269    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:25.551277    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:25.551283    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:25.553516    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:25.553584    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:26.051047    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:26.051070    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:26.051081    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:26.051095    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:26.053722    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:26.552294    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:26.552315    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:26.552326    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:26.552333    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:26.555323    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:27.051500    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:27.051522    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:27.051570    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:27.051580    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:27.054761    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:27.552023    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:27.552067    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:27.552074    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:27.552076    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:27.554045    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:27.554105    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:28.051012    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:28.051068    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:28.051080    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:28.051091    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:28.053095    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:28.553091    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:28.553112    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:28.553123    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:28.553130    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:28.556091    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:29.051557    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:29.051582    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:29.051593    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:29.051606    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:29.055042    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:29.551292    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:29.551307    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:29.551313    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:29.551315    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:29.553314    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:30.051884    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:30.051917    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:30.051955    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:30.051962    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:30.055200    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:30.055279    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:30.551827    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:30.551854    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:30.551865    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:30.551873    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:30.555019    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:31.051813    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:31.051841    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:31.051852    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:31.051859    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:31.054944    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:31.551163    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:31.551184    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:31.551194    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:31.551200    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:31.553888    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:32.051783    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:32.051819    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:32.051832    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:32.051840    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:32.054547    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:32.552296    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:32.552350    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:32.552364    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:32.552371    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:32.555225    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:32.555300    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:33.052924    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:33.052939    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:33.052947    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:33.052952    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:33.054987    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:33.551522    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:33.551541    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:33.551549    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:33.551553    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:33.554655    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:34.052385    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:34.052434    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:34.052446    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:34.052454    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:34.055087    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:34.551264    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:34.551281    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:34.551289    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:34.551294    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:34.553737    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:35.051346    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:35.051367    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:35.051378    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:35.051386    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:35.054339    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:35.054443    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:35.552208    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:35.552226    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:35.552233    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:35.552237    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:35.554511    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:36.051189    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:36.051204    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:36.051212    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:36.051216    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:36.053190    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:36.553334    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:36.553356    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:36.553368    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:36.553374    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:36.556524    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:37.052539    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:37.052561    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:37.052573    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:37.052580    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:37.055836    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:37.055914    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:37.553023    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:37.553043    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:37.553053    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:37.553059    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:37.556810    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:38.051735    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:38.051757    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:38.051774    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:38.051782    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:38.055061    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:38.552449    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:38.552476    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:38.552487    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:38.552492    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:38.555685    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:39.051387    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:39.051409    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:39.051420    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:39.051425    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:39.054522    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:39.552260    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:39.552285    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:39.552298    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:39.552304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:39.555403    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:39.555495    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:40.051243    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:40.051310    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:40.051324    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:40.051331    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:40.054070    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:40.551873    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:40.551898    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:40.551960    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:40.551969    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:40.554968    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:41.051578    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:41.051606    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:41.051618    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:41.051623    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:41.054807    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:41.551916    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:41.551931    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:41.551943    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:41.551947    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:41.554367    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:42.053217    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:42.053241    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:42.053249    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:42.053255    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:42.056808    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:42.056893    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:42.552774    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:42.552803    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:42.552822    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:42.552882    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:42.556248    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:43.051301    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:43.051316    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:43.051322    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:43.051328    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:43.054036    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:43.553401    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:43.553423    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:43.553434    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:43.553471    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:43.557035    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:44.053457    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:44.053478    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:44.053489    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:44.053496    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:44.056841    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:44.551566    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:44.551590    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:44.551603    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:44.551609    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:44.555416    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:44.555493    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:45.051853    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:45.051879    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:45.051888    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:45.051895    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:45.055040    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:45.553444    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:45.553468    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:45.553515    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:45.553526    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:45.556794    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:46.051786    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:46.051806    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:46.051814    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:46.051832    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:46.053901    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:46.552785    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:46.552817    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:46.552830    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:46.552836    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:46.556083    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:46.556162    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:47.053456    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:47.053482    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:47.053494    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:47.053502    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:47.057009    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:47.553130    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:47.553152    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:47.553164    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:47.553174    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:47.559073    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:49:48.053108    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:48.053134    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:48.053145    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:48.053152    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:48.057067    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:48.552706    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:48.552729    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:48.552739    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:48.552747    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:48.556474    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:48.556559    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:49.051602    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:49.051625    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:49.051637    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:49.051646    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:49.054881    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:49.552627    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:49.552655    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:49.552667    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:49.552674    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:49.556037    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:50.052601    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:50.052618    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:50.052626    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:50.052631    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:50.055469    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:50.552155    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:50.552178    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:50.552190    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:50.552195    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:50.555596    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:51.052878    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:51.052905    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:51.052917    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:51.052922    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:51.056451    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:51.056532    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:51.552110    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:51.552139    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:51.552185    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:51.552195    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:51.555342    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:52.051920    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:52.051944    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:52.051961    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:52.051973    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:52.055723    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:52.551716    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:52.551743    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:52.551753    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:52.551790    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:52.554933    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:53.051908    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:53.051920    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:53.051926    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:53.051930    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:53.053756    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:53.552282    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:53.552329    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:53.552340    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:53.552346    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:53.554573    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:53.554662    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:54.052641    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:54.052700    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:54.052714    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:54.052724    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:54.055914    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:54.553424    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:54.553444    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:54.553453    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:54.553461    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:54.556331    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:55.052118    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:55.052139    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:55.052150    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:55.052156    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:55.055406    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:55.552115    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:55.552140    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:55.552153    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:55.552159    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:55.555054    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:55.555134    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:56.053229    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:56.053253    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:56.053266    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:56.053274    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:56.056807    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:56.552807    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:56.552829    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:56.552841    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:56.552851    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:56.556291    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:57.052874    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:57.052896    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:57.052908    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:57.052913    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:57.056108    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:57.553670    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:57.553697    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:57.553745    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:57.553758    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:57.557263    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:57.557331    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:58.051791    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:58.051817    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:58.051828    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:58.051833    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:58.055250    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:58.552518    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:58.552545    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:58.552556    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:58.552562    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:58.555625    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:59.053863    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:59.053885    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:59.053905    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:59.053914    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:59.057121    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:59.553259    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:59.553272    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:59.553278    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:59.553280    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:59.555213    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:00.052041    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:00.052090    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:00.052103    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:00.052110    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:00.054860    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:00.054945    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:00.552587    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:00.552608    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:00.552620    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:00.552626    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:00.555838    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:01.052694    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:01.052721    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:01.052732    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:01.052746    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:01.056070    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:01.553816    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:01.553839    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:01.553855    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:01.553865    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:01.557015    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:02.051783    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:02.051804    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:02.051815    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:02.051821    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:02.055085    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:02.055158    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:02.553062    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:02.553085    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:02.553097    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:02.553105    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:02.556329    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:03.052789    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:03.052811    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:03.052822    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:03.052827    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:03.055899    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:03.553258    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:03.553318    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:03.553331    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:03.553342    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:03.556755    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:04.052379    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:04.052401    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:04.052413    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:04.052420    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:04.056086    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:04.056163    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:04.552058    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:04.552079    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:04.552090    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:04.552097    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:04.554885    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:05.052906    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:05.052929    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:05.052942    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:05.052950    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:05.056201    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:05.551940    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:05.551961    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:05.551987    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:05.552004    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:05.554036    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:06.052760    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:06.052792    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:06.052801    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:06.052805    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:06.055319    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:06.551983    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:06.552008    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:06.552043    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:06.552063    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:06.554797    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:06.554875    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:07.052461    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:07.052481    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:07.052493    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:07.052501    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:07.055206    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:07.553476    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:07.553503    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:07.553555    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:07.553574    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:07.556741    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:08.052214    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:08.052241    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:08.052252    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:08.052258    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:08.055720    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:08.552079    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:08.552098    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:08.552110    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:08.552119    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:08.554790    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:09.054011    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:09.054033    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:09.054043    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:09.054051    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:09.057425    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:09.057563    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:09.553004    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:09.553024    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:09.553034    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:09.553042    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:09.556104    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:10.052832    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:10.052860    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:10.052870    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:10.052878    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:10.060001    6731 round_trippers.go:574] Response Status: 404 Not Found in 7 milliseconds
	I0819 10:50:10.553943    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:10.553967    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:10.553979    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:10.553984    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:10.557026    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:11.052217    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:11.052240    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:11.052251    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:11.052259    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:11.055611    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:11.553180    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:11.553218    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:11.553231    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:11.553237    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:11.556609    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:11.556679    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:12.053209    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:12.053234    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:12.053244    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:12.053260    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:12.056483    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:12.552948    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:12.552974    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:12.553016    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:12.553022    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:12.555995    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:13.054040    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:13.054066    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:13.054078    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:13.054086    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:13.057218    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:13.553331    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:13.553409    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:13.553428    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:13.553434    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:13.556700    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:13.557047    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:14.053359    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:14.053404    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:14.053418    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:14.053425    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:14.056093    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:14.554003    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:14.554020    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:14.554028    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:14.554033    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:14.556621    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:15.052240    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:15.052259    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:15.052267    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:15.052271    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:15.054851    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:15.552210    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:15.552233    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:15.552292    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:15.552296    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:15.554673    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:16.052627    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:16.052651    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:16.052662    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:16.052669    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:16.055859    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:16.055916    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:16.553446    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:16.553469    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:16.553480    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:16.553487    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:16.556493    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:17.052642    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:17.052665    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:17.052676    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:17.052684    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:17.055560    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:17.553327    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:17.553367    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:17.553375    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:17.553380    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:17.555848    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:18.054167    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:18.054195    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:18.054206    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:18.054214    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:18.057363    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:18.057447    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:18.552623    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:18.552664    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:18.552674    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:18.552682    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:18.556056    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:19.052692    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:19.052730    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:19.052738    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:19.052743    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:19.055382    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:19.553527    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:19.553553    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:19.553564    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:19.553602    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:19.557189    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:20.052711    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:20.052733    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:20.052744    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:20.052752    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:20.056398    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:20.552175    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:20.552196    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:20.552209    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:20.552216    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:20.555567    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:20.555628    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:21.054191    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:21.054216    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:21.054227    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:21.054235    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:21.057762    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:21.552794    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:21.552815    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:21.552827    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:21.552832    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:21.556056    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:22.052279    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:22.052315    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:22.052328    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:22.052335    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:22.055613    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:22.553162    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:22.553188    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:22.553232    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:22.553252    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:22.556362    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:22.556431    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:23.054316    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:23.054338    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:23.054350    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:23.054356    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:23.057542    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:23.552232    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:23.552245    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:23.552272    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:23.552280    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:23.553967    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:24.054003    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:24.054026    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:24.054037    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:24.054045    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:24.057299    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:24.552432    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:24.552455    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:24.552469    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:24.552477    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:24.555494    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:25.053013    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:25.053035    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:25.053047    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:25.053052    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:25.056230    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:25.056306    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:25.552539    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:25.552565    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:25.552577    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:25.552615    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:25.555941    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:26.053283    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:26.053298    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:26.053304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:26.053308    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:26.055446    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:26.553408    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:26.553431    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:26.553443    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:26.553450    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:26.556711    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:27.052272    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:27.052292    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:27.052303    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:27.052309    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:27.055283    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:27.553300    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:27.553326    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:27.553337    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:27.553344    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:27.556249    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:27.556320    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:28.052328    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:28.052357    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:28.052369    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:28.052375    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:28.054916    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:28.554421    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:28.554442    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:28.554453    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:28.554461    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:28.557682    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:29.053409    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:29.053426    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:29.053434    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:29.053438    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:29.055745    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:29.552751    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:29.552764    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:29.552769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:29.552771    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:29.554734    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:30.052686    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:30.052706    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:30.052712    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:30.052717    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:30.056887    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:50:30.056971    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:30.552691    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:30.552714    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:30.552725    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:30.552731    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:30.555684    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:31.052415    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:31.052438    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:31.052450    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:31.052456    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:31.054776    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:31.552531    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:31.552556    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:31.552611    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:31.552622    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:31.555322    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:32.053314    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:32.053340    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:32.053351    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:32.053356    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:32.056305    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:32.553594    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:32.553614    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:32.553625    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:32.553632    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:32.556478    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:32.556594    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:33.053039    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:33.053056    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:33.053065    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:33.053071    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:33.055406    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:33.553287    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:33.553306    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:33.553317    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:33.553324    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:33.555646    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:34.053235    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:34.053254    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:34.053262    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:34.053268    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:34.055633    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:34.552665    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:34.552680    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:34.552689    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:34.552693    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:34.554960    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:35.052632    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:35.052653    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:35.052664    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:35.052669    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:35.055247    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:35.055326    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:35.553273    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:35.553297    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:35.553309    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:35.553316    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:35.556601    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:36.052771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:36.052791    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:36.052803    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:36.052809    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:36.056225    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:36.553576    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:36.553599    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:36.553611    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:36.553618    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:36.556923    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:37.052815    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:37.052842    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:37.052883    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:37.052890    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:37.055843    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:37.055915    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:37.554175    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:37.554196    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:37.554208    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:37.554215    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:37.557673    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:38.052621    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:38.052641    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:38.052652    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:38.052659    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:38.055675    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:38.554585    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:38.554641    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:38.554655    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:38.554663    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:38.558316    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:39.052502    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:39.052557    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:39.052585    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:39.052593    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:39.055843    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:39.553574    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:39.553601    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:39.553612    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:39.553650    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:39.557016    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:39.557096    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:40.052628    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:40.052657    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:40.052695    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:40.052721    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:40.055547    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:40.553381    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:40.553406    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:40.553444    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:40.553450    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:40.556591    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:41.053865    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:41.053894    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:41.053906    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:41.053914    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:41.057267    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:41.553609    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:41.553633    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:41.553644    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:41.553652    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:41.556535    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:42.053547    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:42.053575    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:42.053585    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:42.053591    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:42.056838    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:42.056911    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:42.552950    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:42.552967    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:42.552975    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:42.552979    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:42.555606    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:43.054679    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:43.054705    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:43.054716    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:43.054723    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:43.057954    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:43.553147    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:43.553170    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:43.553180    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:43.553187    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:43.556659    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:44.052693    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:44.052712    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:44.052725    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:44.052731    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:44.055591    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:44.553352    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:44.553405    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:44.553418    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:44.553427    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:44.556267    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:44.556423    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:45.052819    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:45.052873    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:45.052887    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:45.052898    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:45.055681    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:45.553717    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:45.553743    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:45.553754    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:45.553760    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:45.557371    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:46.053721    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:46.053741    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:46.053750    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:46.053755    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:46.056953    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:46.554733    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:46.554759    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:46.554770    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:46.554776    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:46.557881    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:46.557956    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:47.053088    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:47.053114    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:47.053139    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:47.053178    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:47.057150    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:47.553469    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:47.553491    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:47.553503    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:47.553509    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:47.556795    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:48.053927    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:48.053949    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:48.053961    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:48.053967    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:48.057833    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:48.554794    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:48.554819    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:48.554829    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:48.554836    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:48.558066    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:48.558139    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:49.053347    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:49.053369    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:49.053380    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:49.053385    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:49.056191    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:49.552995    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:49.553017    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:49.553028    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:49.553035    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:49.556705    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:50.052811    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:50.052836    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:50.052848    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:50.052857    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:50.056125    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:50.553318    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:50.553336    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:50.553343    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:50.553348    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:50.555815    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:51.054852    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:51.054879    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:51.054922    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:51.054929    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:51.058448    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:51.058549    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:51.554735    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:51.554757    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:51.554769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:51.554777    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:51.558250    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:52.053837    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:52.053859    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:52.053871    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:52.053878    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:52.057090    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:52.553164    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:52.553185    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:52.553196    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:52.553203    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:52.556093    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:53.052774    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:53.052789    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:53.052796    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:53.052802    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:53.054809    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:53.553273    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:53.553289    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:53.553296    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:53.553300    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:53.555457    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:53.555522    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:54.054101    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:54.054116    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:54.054126    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:54.054130    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:54.056415    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:54.554015    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:54.554035    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:54.554045    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:54.554052    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:54.557294    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:55.053376    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:55.053396    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:55.053407    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:55.053412    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:55.056562    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:55.553034    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:55.553047    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:55.553054    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:55.553057    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:55.555385    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:56.053965    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:56.053990    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:56.054002    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:56.054007    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:56.057002    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:56.057072    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:56.554082    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:56.554107    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:56.554118    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:56.554125    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:56.557276    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:57.053741    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:57.053768    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:57.053780    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:57.053786    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:57.057162    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:57.554395    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:57.554421    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:57.554433    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:57.554440    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:57.557885    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:58.052984    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:58.052998    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:58.053006    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:58.053010    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:58.055164    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:58.553222    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:58.553241    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:58.553271    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:58.553276    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:58.555082    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:58.555137    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:59.054358    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:59.054380    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:59.054392    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:59.054413    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:59.058040    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:59.553380    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:59.553408    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:59.553419    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:59.553425    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:59.556014    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:00.053290    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:00.053308    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:00.053344    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:00.053349    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:00.055796    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:00.553346    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:00.553373    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:00.553384    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:00.553391    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:00.556794    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:00.556903    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:01.053146    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:01.053172    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:01.053215    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:01.053225    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:01.055877    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:01.553221    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:01.553247    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:01.553258    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:01.553265    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:01.556552    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:02.055126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:02.055160    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:02.055175    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:02.055184    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:02.058471    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:02.553937    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:02.553960    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:02.553970    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:02.553975    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:02.557401    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:02.557478    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:03.053784    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:03.053806    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:03.053857    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:03.053867    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:03.056959    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:03.553699    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:03.553755    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:03.553769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:03.553777    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:03.556657    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:04.055276    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:04.055300    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:04.055312    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:04.055319    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:04.058607    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:04.553743    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:04.553769    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:04.553780    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:04.553784    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:04.557143    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:05.054407    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:05.054427    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:05.054439    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:05.054452    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:05.057462    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:05.057531    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:05.554464    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:05.554485    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:05.554497    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:05.554502    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:05.557990    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:06.053104    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:06.053129    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:06.053141    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:06.053150    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:06.055868    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:06.553581    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:06.553600    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:06.553612    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:06.553620    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:06.556556    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:07.053664    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:07.053686    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:07.053698    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:07.053708    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:07.057073    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:07.553166    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:07.553191    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:07.553203    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:07.553210    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:07.556450    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:07.556521    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:08.053159    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:08.053174    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:08.053183    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:08.053188    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:08.055328    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:08.553866    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:08.553892    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:08.553904    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:08.553912    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:08.556775    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:09.054290    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:09.054339    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:09.054352    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:09.054358    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:09.057196    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:09.554985    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:09.555010    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:09.555022    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:09.555027    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:09.558086    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:09.558151    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	[... the same poll repeats every ~500 ms from 10:51:10 through 10:52:03: GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03 with identical request headers, each answered "404 Not Found" in 1-5 milliseconds, and node_ready.go:53 logging `error getting node "ha-431000-m03": nodes "ha-431000-m03" not found` roughly every 2.5 s ...]
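For context on the loop above: minikube is waiting for the newly added node "ha-431000-m03" to register with the apiserver and report Ready, re-issuing GET /api/v1/nodes/<name> about twice per second; an unbroken run of 404s means the kubelet on that machine never registered the node object. The sketch below is illustrative only — an assumed stand-in, not minikube's actual node_ready.go implementation — showing how such a readiness wait loop looks with client-go, assuming a reachable kubeconfig at the default path:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver until the named node exists and reports
// Ready, or the timeout expires. A 404 (node not registered yet) is treated
// as "keep waiting", matching the behaviour visible in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			// Same state as the repeated `nodes "ha-431000-m03" not found`
			// entries above: the Node object does not exist yet.
		case err != nil:
			return err
		default:
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not ready after %v", name, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Assumed setup: kubeconfig in the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// 500 ms matches the polling cadence seen in the log.
	if err := waitNodeReady(context.Background(), cs, "ha-431000-m03", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Each GET/404 pair in the log corresponds to one Nodes().Get call here; the node_ready.go:53 prefix on the periodic error lines identifies the minikube source file driving the real loop.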
	I0819 10:52:04.054425    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:04.054474    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:04.054486    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:04.054493    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:04.057361    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:04.555269    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:04.555292    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:04.555303    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:04.555310    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:04.557975    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:05.055439    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:05.055462    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:05.055474    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:05.055480    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:05.058438    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:05.555041    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:05.555066    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:05.555110    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:05.555119    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:05.558183    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:05.558255    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:06.054744    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:06.054767    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:06.054780    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:06.054786    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:06.057960    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:06.554522    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:06.554548    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:06.554560    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:06.554568    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:06.557313    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:07.055173    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:07.055199    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:07.055239    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:07.055247    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:07.058653    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:07.555300    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:07.555317    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:07.555328    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:07.555333    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:07.558041    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:08.055354    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:08.055368    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:08.055376    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:08.055379    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:08.057374    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:08.057433    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:08.555236    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:08.555259    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:08.555270    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:08.555277    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:08.558651    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:09.055614    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:09.055640    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:09.055650    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:09.055683    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:09.058939    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:09.556607    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:09.556630    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:09.556641    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:09.556646    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:09.559951    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:10.056557    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:10.056584    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:10.056595    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:10.056603    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:10.060049    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:10.060123    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:10.555721    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:10.555747    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:10.555758    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:10.555766    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:10.559208    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:11.054718    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:11.054745    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:11.054757    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:11.054765    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:11.058258    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:11.554755    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:11.554775    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:11.554787    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:11.554792    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:11.557852    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:12.054659    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:12.054685    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:12.054725    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:12.054736    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:12.057557    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:12.555786    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:12.555805    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:12.555816    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:12.555825    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:12.558720    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:12.558790    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:13.054520    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:13.054531    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:13.054537    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:13.054541    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:13.056746    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:13.555035    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:13.555056    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:13.555069    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:13.555076    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:13.558241    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:14.055844    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:14.055904    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:14.055918    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:14.055926    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:14.059251    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:14.556682    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:14.556705    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:14.556718    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:14.556724    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:14.560091    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:14.560167    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:15.055321    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:15.055341    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:15.055353    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:15.055358    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:15.058575    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:15.554664    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:15.554684    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:15.554698    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:15.554706    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:15.557939    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:16.055206    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:16.055227    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:16.055238    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:16.055246    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:16.058598    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:16.555194    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:16.555214    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:16.555226    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:16.555232    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:16.558383    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:17.056686    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:17.056714    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:17.056726    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:17.056731    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:17.060029    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:17.060100    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:17.556714    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:17.556740    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:17.556750    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:17.556755    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:17.560141    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:18.054996    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:18.055011    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:18.055019    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:18.055025    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:18.057822    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:18.555828    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:18.555841    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:18.555849    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:18.555854    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:18.558383    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:19.055041    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:19.055065    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:19.055077    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:19.055085    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:19.058023    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:19.555151    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:19.555177    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:19.555188    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:19.555193    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:19.558408    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:19.558484    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:20.055165    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:20.055192    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:20.055253    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:20.055266    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:20.058241    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:20.555361    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:20.555384    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:20.555396    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:20.555404    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:20.558504    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:21.056388    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:21.056411    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:21.056424    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:21.056429    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:21.059536    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:21.554779    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:21.554793    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:21.554802    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:21.554805    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:21.557366    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:22.055736    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:22.055758    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:22.055769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:22.055776    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:22.058591    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:22.058661    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:22.555812    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:22.555836    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:22.555847    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:22.555854    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:22.558948    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:23.056853    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:23.056919    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:23.056944    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:23.056953    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:23.062337    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:52:23.554982    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:23.555000    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:23.555011    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:23.555018    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:23.557644    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:24.054899    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:24.054938    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:24.054947    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:24.054953    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:24.057729    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:24.556586    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:24.556600    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:24.556623    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:24.556627    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:24.558638    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:24.558692    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:25.056076    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:25.056096    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:25.056107    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:25.056114    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:25.058803    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:25.556269    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:25.556291    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:25.556303    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:25.556309    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:25.559377    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:26.055956    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:26.055982    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:26.055993    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:26.056000    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:26.059192    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:26.556280    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:26.556302    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:26.556313    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:26.556321    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:26.559053    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:26.559129    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:27.055476    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:27.055501    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:27.055512    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:27.055518    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:27.059048    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:27.554857    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:27.554875    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:27.554889    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:27.554899    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:27.557516    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:28.056934    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:28.056960    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:28.056970    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:28.056977    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:28.061498    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:52:28.556243    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:28.556264    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:28.556274    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:28.556280    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:28.560054    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:28.560129    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:29.056620    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:29.056646    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:29.056690    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:29.056714    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:29.060206    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:29.555385    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:29.555411    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:29.555422    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:29.555429    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:29.558512    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:30.055471    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:30.055493    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:30.055506    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:30.055514    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:30.058459    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:30.555484    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:30.555504    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:30.555516    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:30.555524    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:30.558311    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:31.054968    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:31.055015    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:31.055027    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:31.055032    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:31.057916    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:31.058060    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:31.556014    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:31.556033    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:31.556044    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:31.556050    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:31.559609    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:32.056534    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:32.056581    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:32.056591    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:32.056597    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:32.059302    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:32.555775    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:32.555794    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:32.555806    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:32.555814    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:32.558491    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:33.057040    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:33.057067    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:33.057077    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:33.057085    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:33.060635    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:33.060713    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:33.555570    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:33.555591    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:33.555602    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:33.555608    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:33.559425    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:34.057120    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:34.057141    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:34.057148    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:34.057153    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:34.060018    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:34.555126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:34.555138    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:34.555146    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:34.555150    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:34.557094    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:35.055444    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:35.055467    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:35.055479    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:35.055486    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:35.058594    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:35.555149    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:35.555197    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:35.555209    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:35.555218    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:35.558115    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:35.558186    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:36.056849    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:36.056876    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:36.056920    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:36.056932    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:36.060766    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:36.555499    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:36.555519    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:36.555528    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:36.555532    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:36.558358    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:37.055144    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:37.055195    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:37.055208    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:37.055215    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:37.058216    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:37.555944    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:37.556001    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:37.556013    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:37.556023    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:37.559260    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:37.559332    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:38.055318    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:38.055338    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:38.055350    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:38.055355    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:38.058181    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:38.555299    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:38.555317    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:38.555329    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:38.555337    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:38.558216    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:39.056988    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:39.057016    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:39.057073    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:39.057083    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:39.060253    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:39.555159    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:39.555181    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:39.555193    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:39.555200    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:39.558336    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:40.055085    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:40.055100    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:40.055105    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:40.055108    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:40.057225    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:40.057326    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:40.556336    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:40.556362    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:40.556374    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:40.556380    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:40.559611    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:41.056619    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:41.056644    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:41.056655    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:41.056661    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:41.060851    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:52:41.555283    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:41.555295    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:41.555302    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:41.555305    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:41.556982    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:42.056943    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:42.056967    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:42.056978    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:42.056985    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:42.060100    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:42.060167    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:42.556338    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:42.556357    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:42.556367    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:42.556377    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:42.559414    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:43.055551    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:43.055573    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:43.055586    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:43.055594    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:43.058624    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:43.555249    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:43.555259    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:43.555264    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:43.555266    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:43.557514    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:44.057256    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:44.057279    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:44.057320    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:44.057332    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:44.060185    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:44.060336    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:44.555282    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:44.555310    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:44.555349    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:44.555359    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:44.557869    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:45.055728    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:45.055742    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:45.055751    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:45.055756    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:45.058016    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:45.556887    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:45.556939    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:45.556953    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:45.556961    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:45.560018    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:46.055302    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:46.055315    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:46.055321    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:46.055324    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:46.059667    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:52:46.555661    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:46.555681    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:46.555693    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:46.555699    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:46.558535    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:46.558625    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:47.055328    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:47.055352    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:47.055364    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:47.055370    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:47.062725    6731 round_trippers.go:574] Response Status: 404 Not Found in 7 milliseconds
	I0819 10:52:47.555663    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:47.555688    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:47.555699    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:47.555706    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:47.557822    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:48.056671    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:48.056687    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:48.056695    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:48.056700    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:48.059006    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:48.555409    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:48.555429    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:48.555441    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:48.555450    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:48.557941    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:49.057092    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:49.057119    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:49.057131    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:49.057137    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:49.060065    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:49.060130    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:49.060145    6731 node_ready.go:38] duration metric: took 4m0.005002355s for node "ha-431000-m03" to be "Ready" ...
	I0819 10:52:49.082024    6731 out.go:201] 
	W0819 10:52:49.103661    6731 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0819 10:52:49.103680    6731 out.go:270] * 
	W0819 10:52:49.104908    6731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:52:49.166900    6731 out.go:201] 
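
	The 4m0.005s "duration metric" and the GUEST_START failure above are the poll-until-ready pattern bounded by a context deadline: a GET on the Node object every ~500 ms until it exists and reports Ready, aborted when the outer 6m0s wait expires ("waitNodeCondition: context deadline exceeded"). The sketch below is a minimal, hypothetical Go illustration of that pattern only — it is not minikube's actual node_ready.go code, and it omits the TLS client-certificate authentication a real apiserver request needs; the URL, interval, and timeout are taken from the log.

	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	// waitNodeReady polls url every 500ms until the API returns 200 OK or the
	// context deadline expires, mirroring the log's "404 Not Found every ~500ms,
	// then context deadline exceeded" sequence. Illustrative sketch only.
	func waitNodeReady(ctx context.Context, url string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// Surfaces as "waiting for node to be ready: context deadline exceeded".
				return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
			case <-ticker.C:
				req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
				if err != nil {
					return err
				}
				resp, err := http.DefaultClient.Do(req)
				if err != nil {
					continue // transient network/TLS error: keep polling
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // Node object exists; a real check would also inspect its Ready condition
				}
				// 404 Not Found: the node was never registered, so keep polling until the deadline.
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, "https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03"); err != nil {
			fmt.Println("failed to start node:", err)
		}
	}

	Against a live cluster the equivalent manual check is `kubectl get node ha-431000-m03`, which here would keep returning NotFound because the third control-plane node never registered with the apiserver.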
	
	
	==> Docker <==
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.660449818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.667060942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.667102169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.667230179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 cri-dockerd[1452]: time="2024-08-19T17:48:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bb2d3a2636faf0cfb532ba0f74d5469305e3758ab39cbdf9fa28f8ef5ebf4c3d/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.701236024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.701309443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.701321973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.701403920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.820778563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.820826586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.820837953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.820905001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.876030412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.876130553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.876143392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.876235719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:54 ha-431000 dockerd[1203]: time="2024-08-19T17:48:54.187251071Z" level=info msg="shim disconnected" id=a84c42391a84af02fac8bc4d031f949d77c9b2ceebf766d7c6c36a32ac6a9c95 namespace=moby
	Aug 19 17:48:54 ha-431000 dockerd[1197]: time="2024-08-19T17:48:54.187571465Z" level=info msg="ignoring event" container=a84c42391a84af02fac8bc4d031f949d77c9b2ceebf766d7c6c36a32ac6a9c95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:48:54 ha-431000 dockerd[1203]: time="2024-08-19T17:48:54.187882726Z" level=warning msg="cleaning up after shim disconnected" id=a84c42391a84af02fac8bc4d031f949d77c9b2ceebf766d7c6c36a32ac6a9c95 namespace=moby
	Aug 19 17:48:54 ha-431000 dockerd[1203]: time="2024-08-19T17:48:54.187960780Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:49:06 ha-431000 dockerd[1203]: time="2024-08-19T17:49:06.688629405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:49:06 ha-431000 dockerd[1203]: time="2024-08-19T17:49:06.688666721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:49:06 ha-431000 dockerd[1203]: time="2024-08-19T17:49:06.688675306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:49:06 ha-431000 dockerd[1203]: time="2024-08-19T17:49:06.688795214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bcf3cd19406a4       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       3                   19da8eae0d48a       storage-provisioner
	414908be37c88       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   fd28a05caf8d7       busybox-7dff88458-x7m6m
	51e18fb0428a6       12968670680f4                                                                                         4 minutes ago       Running             kindnet-cni               1                   bb2d3a2636faf       kindnet-lvdbg
	d7843c76d3e01       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   ca4ec932efa63       coredns-6f6b679f8f-vc76p
	a84c42391a84a       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       2                   19da8eae0d48a       storage-provisioner
	29764bad0bc90       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   1d64ea8ea4f81       coredns-6f6b679f8f-hr2qx
	5636b94096fee       ad83b2ca7b09e                                                                                         4 minutes ago       Running             kube-proxy                1                   5627589c9455b       kube-proxy-5l56s
	f4bd8ba2e0437       045733566833c                                                                                         4 minutes ago       Running             kube-controller-manager   2                   1a643a0353bfb       kube-controller-manager-ha-431000
	11f4d59b4fb1d       38af8ddebf499                                                                                         5 minutes ago       Running             kube-vip                  0                   43fb644937b95       kube-vip-ha-431000
	dea4f29e78603       1766f54c897f0                                                                                         5 minutes ago       Running             kube-scheduler            1                   9e839ed84518f       kube-scheduler-ha-431000
	4ed272951c848       045733566833c                                                                                         5 minutes ago       Exited              kube-controller-manager   1                   1a643a0353bfb       kube-controller-manager-ha-431000
	a003b845ec488       604f5db92eaa8                                                                                         5 minutes ago       Running             kube-apiserver            3                   545d8a82cc659       kube-apiserver-ha-431000
	1bac9a6bc6836       2e96e5913fc06                                                                                         5 minutes ago       Running             etcd                      1                   c143d60007e3b       etcd-ha-431000
	4c18dbcc00045       604f5db92eaa8                                                                                         6 minutes ago       Exited              kube-apiserver            2                   5a0fe916eaf1d       kube-apiserver-ha-431000
	da6e4a61b6cf8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Exited              busybox                   0                   6d38fc70c811c       busybox-7dff88458-x7m6m
	b9d1bccf00c94       cbb01a7bd410d                                                                                         24 minutes ago      Exited              coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	a3891ab602da5       cbb01a7bd410d                                                                                         24 minutes ago      Exited              coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              24 minutes ago      Exited              kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                         25 minutes ago      Exited              kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	11d9cd3b2f49f       1766f54c897f0                                                                                         25 minutes ago      Exited              kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                         25 minutes ago      Exited              etcd                      0                   fc30d54d1b565       etcd-ha-431000
	
	
	==> coredns [29764bad0bc9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57280 - 39922 "HINFO IN 6598223870971274302.2706221343910350861. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01011612s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[281575694]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.217) (total time: 30003ms):
	Trace[281575694]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:48:54.221)
	Trace[281575694]: [30.003763494s] [30.003763494s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1147384648]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.218) (total time: 30003ms):
	Trace[1147384648]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:48:54.221)
	Trace[1147384648]: [30.003739495s] [30.003739495s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[953244717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.220) (total time: 30001ms):
	Trace[953244717]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:48:54.221)
	Trace[953244717]: [30.001122159s] [30.001122159s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [a3891ab602da] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[384323591]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.607) (total time: 12726ms):
	Trace[384323591]: ---"Objects listed" error:Unauthorized 12726ms (17:45:24.333)
	Trace[384323591]: [12.726289493s] [12.726289493s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[183169271]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.561) (total time: 12772ms):
	Trace[183169271]: ---"Objects listed" error:Unauthorized 12772ms (17:45:24.334)
	Trace[183169271]: [12.77286543s] [12.77286543s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[321930627]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.615) (total time: 12720ms):
	Trace[321930627]: ---"Objects listed" error:Unauthorized 12719ms (17:45:24.334)
	Trace[321930627]: [12.72052183s] [12.72052183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b9d1bccf00c9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[593417891]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.204) (total time: 13131ms):
	Trace[593417891]: ---"Objects listed" error:Unauthorized 13130ms (17:45:24.335)
	Trace[593417891]: [13.131401942s] [13.131401942s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1133648867]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.419) (total time: 12917ms):
	Trace[1133648867]: ---"Objects listed" error:Unauthorized 12916ms (17:45:24.335)
	Trace[1133648867]: [12.917404362s] [12.917404362s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1960632058]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.301) (total time: 13035ms):
	Trace[1960632058]: ---"Objects listed" error:Unauthorized 13034ms (17:45:24.335)
	Trace[1960632058]: [13.035512102s] [13.035512102s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d7843c76d3e0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52034 - 20734 "HINFO IN 58890247287997822.7011696019754483361. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.010598723s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[901481756]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.217) (total time: 30003ms):
	Trace[901481756]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:48:54.220)
	Trace[901481756]: [30.003857838s] [30.003857838s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1030491669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.220) (total time: 30001ms):
	Trace[1030491669]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:48:54.221)
	Trace[1030491669]: [30.001096527s] [30.001096527s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1524033155]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.217) (total time: 30003ms):
	Trace[1524033155]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:48:54.220)
	Trace[1524033155]: [30.003971024s] [30.003971024s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:52:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:48:11 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:48:11 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:48:11 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:48:11 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 091fd90bc5e54c778c79f60719f28fee
	  System UUID:                7f844fbb-0000-0000-b5d6-699bdfe1640c
	  Boot ID:                    d77cc3ba-25a4-4e2f-b353-1894538ac2ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x7m6m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-6f6b679f8f-hr2qx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 coredns-6f6b679f8f-vc76p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-lvdbg                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-5l56s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  Starting                 25m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  25m (x8 over 25m)      kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  25m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     25m (x7 over 25m)      kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    25m (x8 over 25m)      kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 25m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 25m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           25m                    node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  RegisteredNode           24m                    node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  RegisteredNode           7m11s                  node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeNotReady             7m8s                   node-controller  Node ha-431000 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  6m34s (x2 over 25m)    kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x2 over 25m)    kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x2 over 25m)    kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m34s (x2 over 24m)    kubelet          Node ha-431000 status is now: NodeReady
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  RegisteredNode           4m29s                  node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	
	
	Name:               ha-431000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:52:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:48:05 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:48:05 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:48:05 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:48:05 +0000   Mon, 19 Aug 2024 17:48:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-431000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f78ea9d3ce4f4999bd0f517107045dac
	  System UUID:                decf4e23-0000-0000-95db-084dbcc69753
	  Boot ID:                    30b31def-c649-4af2-9bf8-357051f66687
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2l9lq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 etcd-ha-431000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-qmgqd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-431000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-431000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-5h7j2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-431000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-431000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 24m                    kube-proxy       
	  Normal  Starting                 4m33s                  kube-proxy       
	  Normal  Starting                 7m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)      kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)      kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)      kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                    node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           24m                    node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  NodeAllocatableEnforced  7m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    7m23s (x8 over 7m24s)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s (x7 over 7m24s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m23s (x8 over 7m24s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           7m11s                  node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  Starting                 4m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           4m29s                  node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	
	
	Name:               ha-431000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_42_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:42:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:46:31 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-431000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e639484a1c98402fa6d9e2bb5fe71e03
	  System UUID:                c32a4140-0000-0000-838a-ef53ae6c724a
	  Boot ID:                    65e77bd5-3b1f-49d0-a224-e0cd2d7b346a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wfcpq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kindnet-kcrzx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-2fn5w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                    node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeNotReady             7m14s                  node-controller  Node ha-431000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           7m11s                  node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeHasSufficientPID     6m48s (x3 over 10m)    kubelet          Node ha-431000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m48s (x3 over 10m)    kubelet          Node ha-431000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m48s (x3 over 10m)    kubelet          Node ha-431000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                6m48s (x2 over 9m59s)  kubelet          Node ha-431000-m04 status is now: NodeReady
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  RegisteredNode           4m29s                  node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeNotReady             4m6s                   node-controller  Node ha-431000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.037609] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007731] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.940253] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.008173] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.748288] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.215588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.459478] systemd-fstab-generator[477]: Ignoring "noauto" option for root device
	[  +0.103771] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +1.265239] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.679773] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.260409] systemd-fstab-generator[1163]: Ignoring "noauto" option for root device
	[  +0.102915] systemd-fstab-generator[1175]: Ignoring "noauto" option for root device
	[  +0.106928] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +2.452535] systemd-fstab-generator[1404]: Ignoring "noauto" option for root device
	[  +0.107987] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.113394] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +0.130493] systemd-fstab-generator[1444]: Ignoring "noauto" option for root device
	[  +0.427524] systemd-fstab-generator[1606]: Ignoring "noauto" option for root device
	[  +6.862500] kauditd_printk_skb: 271 callbacks suppressed
	[Aug19 17:48] kauditd_printk_skb: 40 callbacks suppressed
	[ +24.233269] kauditd_printk_skb: 85 callbacks suppressed
	
	
	==> etcd [1bac9a6bc683] <==
	{"level":"info","ts":"2024-08-19T17:48:00.377363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-08-19T17:48:00.377374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.377381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.377390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 7639] sent MsgVote request to c22c1f54a3cc7858 at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.378026Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T17:48:00.378094Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:48:00.409374Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T17:48:00.409450Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:48:00.432257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from c22c1f54a3cc7858 at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.432302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-08-19T17:48:00.432315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.432322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 4"}
	{"level":"warn","ts":"2024-08-19T17:48:00.432865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.704610384s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-19T17:48:00.432910Z","caller":"traceutil/trace.go:171","msg":"trace[1009373033] range","detail":"{range_begin:; range_end:; }","duration":"4.705082403s","start":"2024-08-19T17:47:55.727822Z","end":"2024-08-19T17:48:00.432904Z","steps":["trace[1009373033] 'agreement among raft nodes before linearized reading'  (duration: 4.704609685s)"],"step_count":1}
	{"level":"error","ts":"2024-08-19T17:48:00.432938Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-19T17:48:00.443156Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-431000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T17:48:00.443469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:48:00.443876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:48:00.444023Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T17:48:00.444146Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T17:48:00.445056Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:48:00.445743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-08-19T17:48:00.446239Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:48:00.446924Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-19T17:48:01.085875Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c22c1f54a3cc7858","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	
	
	==> etcd [39fe08877284] <==
	{"level":"warn","ts":"2024-08-19T17:47:05.166887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.171370368s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T17:47:05.166927Z","caller":"traceutil/trace.go:171","msg":"trace[1410457657] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; }","duration":"3.171412779s","start":"2024-08-19T17:47:01.995509Z","end":"2024-08-19T17:47:05.166922Z","steps":["trace[1410457657] 'agreement among raft nodes before linearized reading'  (duration: 3.171369875s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:05.166949Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:47:01.995503Z","time spent":"3.171439259s","remote":"127.0.0.1:54556","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true "}
	2024/08/19 17:47:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T17:47:05.171962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.726994729s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T17:47:05.172040Z","caller":"traceutil/trace.go:171","msg":"trace[1113597890] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; }","duration":"6.727085676s","start":"2024-08-19T17:46:58.444946Z","end":"2024-08-19T17:47:05.172032Z","steps":["trace[1113597890] 'agreement among raft nodes before linearized reading'  (duration: 6.726993461s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:05.172074Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:46:58.444911Z","time spent":"6.727153442s","remote":"127.0.0.1:54494","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":0,"response size":0,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true "}
	2024/08/19 17:47:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-08-19T17:47:05.195528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-19T17:47:05.195597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-19T17:47:05.195611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-19T17:47:05.195621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 7639] sent MsgPreVote request to c22c1f54a3cc7858 at term 3"}
	{"level":"warn","ts":"2024-08-19T17:47:05.231267Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:47:05.231399Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T17:47:05.231486Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T17:47:05.242251Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242314Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242334Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242429Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242480Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242505Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242537Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.254609Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-19T17:47:05.254703Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-19T17:47:05.254731Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-431000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:52:51 up 5 min,  0 users,  load average: 0.35, 0.50, 0.25
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:46:23.914700       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:33.918534       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:33.918663       1 main.go:299] handling current node
	I0819 17:46:33.918861       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:33.918971       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:33.919255       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:33.919335       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:43.920546       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:43.920598       1 main.go:299] handling current node
	I0819 17:46:43.920613       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:43.920620       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:43.920738       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:43.920772       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:53.913617       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:53.913747       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:53.913917       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:53.913949       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:53.914169       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:53.914262       1 main.go:299] handling current node
	I0819 17:47:03.921210       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:47:03.921259       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:47:03.921491       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:47:03.921521       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:47:03.922162       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:47:03.922193       1 main.go:299] handling current node
	
	
	==> kindnet [51e18fb0428a] <==
	I0819 17:52:04.908365       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:52:14.910133       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:52:14.910250       1 main.go:299] handling current node
	I0819 17:52:14.910275       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:52:14.910288       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:52:14.910444       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:52:14.910529       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:52:24.907148       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:52:24.907251       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:52:24.907462       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:52:24.907501       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:52:24.907558       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:52:24.907594       1 main.go:299] handling current node
	I0819 17:52:34.914864       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:52:34.914888       1 main.go:299] handling current node
	I0819 17:52:34.914898       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:52:34.914901       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:52:34.914974       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:52:34.914979       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:52:44.919402       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:52:44.919437       1 main.go:299] handling current node
	I0819 17:52:44.919454       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:52:44.919461       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:52:44.919602       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:52:44.919611       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [4c18dbcc0004] <==
	W0819 17:47:06.224404       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224462       1 logging.go:55] [core] [Channel #21 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224512       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224540       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224567       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224707       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224877       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224939       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225060       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225185       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225305       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225473       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225603       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225483       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.223400       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.223780       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224051       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224207       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224914       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225824       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.241577       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.241624       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.242647       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.242737       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.242800       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a003b845ec48] <==
	I0819 17:48:01.313281       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0819 17:48:01.331515       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0819 17:48:01.328782       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0819 17:48:01.331698       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0819 17:48:01.411877       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 17:48:01.413426       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 17:48:01.413779       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:48:01.419113       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 17:48:01.429688       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 17:48:01.430281       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0819 17:48:01.430591       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 17:48:01.429696       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 17:48:01.431877       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 17:48:01.432005       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 17:48:01.432301       1 aggregator.go:171] initial CRD sync complete...
	I0819 17:48:01.432436       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 17:48:01.432480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 17:48:01.432747       1 cache.go:39] Caches are synced for autoregister controller
	I0819 17:48:01.433634       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 17:48:01.446079       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:48:01.446288       1 policy_source.go:224] refreshing policies
	I0819 17:48:01.492628       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 17:48:02.319223       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:48:25.142240       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 17:49:02.969551       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4ed272951c84] <==
	I0819 17:47:41.490925       1 serving.go:386] Generated self-signed cert in-memory
	I0819 17:47:41.916844       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 17:47:41.916877       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:47:41.919139       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:47:41.919369       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 17:47:41.919719       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 17:47:41.919893       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 17:48:01.923605       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [f4bd8ba2e043] <==
	I0819 17:48:23.103861       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0819 17:48:23.108352       1 shared_informer.go:320] Caches are synced for crt configmap
	I0819 17:48:23.118998       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0819 17:48:23.133218       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 17:48:23.569709       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 17:48:23.569745       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 17:48:23.579259       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 17:48:25.098760       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="41.41µs"
	I0819 17:48:25.140780       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="42.296µs"
	I0819 17:48:25.174794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.532424ms"
	I0819 17:48:25.174863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.786µs"
	I0819 17:48:45.310576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:48:45.327813       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:48:45.329932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.670512ms"
	I0819 17:48:45.330669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.87µs"
	I0819 17:48:48.065252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:48:50.387040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:49:02.979492       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qvf7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qvf7h\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 17:49:02.980150       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"acb20e0e-195e-4196-a326-6cecf7b6a85e", APIVersion:"v1", ResourceVersion:"298", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qvf7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qvf7h": the object has been modified; please apply your changes to the latest version and try again
	I0819 17:49:02.996861       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qvf7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qvf7h\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 17:49:02.997253       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"acb20e0e-195e-4196-a326-6cecf7b6a85e", APIVersion:"v1", ResourceVersion:"298", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qvf7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qvf7h": the object has been modified; please apply your changes to the latest version and try again
	I0819 17:49:03.001503       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="35.707158ms"
	E0819 17:49:03.002380       1 replica_set.go:560] "Unhandled Error" err="sync \"kube-system/coredns-6f6b679f8f\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-6f6b679f8f\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0819 17:49:03.004337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="138.881µs"
	I0819 17:49:03.009397       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="140.999µs"
	
	
	==> kube-proxy [5636b94096fe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:48:24.349165       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:48:24.367746       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0819 17:48:24.368041       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:48:24.405399       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:48:24.405456       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:48:24.405475       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:48:24.408447       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:48:24.408968       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:48:24.409000       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:48:24.413438       1 config.go:197] "Starting service config controller"
	I0819 17:48:24.414215       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:48:24.414469       1 config.go:326] "Starting node config controller"
	I0819 17:48:24.414498       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:48:24.415820       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:48:24.415879       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:48:24.514730       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:48:24.514769       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:48:24.516651       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [889ab608901b] <==
	E0819 17:44:04.860226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:23.290432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:23.290751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:23.290543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:23.291205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:26.362595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:26.363019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:41.722266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:41.722341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:41.722406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:41.722425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:54.009699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:54.009972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:45:09.369057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:45:09.369337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:45:30.873553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:45:30.873673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:45:33.945461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:45:33.945676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	E0819 17:45:08.312166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 17:45:09.806525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 17:45:10.272292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	W0819 17:45:25.011877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:45:25.011937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:28.351281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:45:28.351338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:31.008358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 17:45:31.008417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.186287       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:45:33.186381       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 17:45:36.848394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:45:36.848442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0819 17:45:54.148342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.169.0.5:50394->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.169.0.5:50378->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148560       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.169.0.5:50362->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.169.0.5:50356->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.169.0.5:50346->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.149161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.169.0.5:50400->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.149643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.169.0.5:50358->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.149841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.169.0.5:50398->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	I0819 17:47:05.116640       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 17:47:05.132838       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 17:47:05.130413       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0819 17:47:05.147031       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dea4f29e7860] <==
	I0819 17:47:41.723714       1 serving.go:386] Generated self-signed cert in-memory
	W0819 17:47:52.174871       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0819 17:47:52.174919       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 17:47:52.174925       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 17:48:01.357387       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 17:48:01.359330       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:48:01.366155       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 17:48:01.366276       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 17:48:01.366447       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 17:48:01.366799       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 17:48:01.470208       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 17:48:33 ha-431000 kubelet[1613]: I0819 17:48:33.730610    1613 scope.go:117] "RemoveContainer" containerID="73731822fbc4d4dbf4db07c6f5dd51b2780032b6962b9f1b913db69256447145"
	Aug 19 17:48:54 ha-431000 kubelet[1613]: I0819 17:48:54.405239    1613 scope.go:117] "RemoveContainer" containerID="e3a7fa32f1ca248b3472364097dfb5d39d7c3e1c77226ba8700d4f57f66fbd4e"
	Aug 19 17:48:54 ha-431000 kubelet[1613]: I0819 17:48:54.405461    1613 scope.go:117] "RemoveContainer" containerID="a84c42391a84af02fac8bc4d031f949d77c9b2ceebf766d7c6c36a32ac6a9c95"
	Aug 19 17:48:54 ha-431000 kubelet[1613]: E0819 17:48:54.405542    1613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e68070ef-bdea-45e6-b7a8-8834534fa616)\"" pod="kube-system/storage-provisioner" podUID="e68070ef-bdea-45e6-b7a8-8834534fa616"
	Aug 19 17:49:06 ha-431000 kubelet[1613]: I0819 17:49:06.639633    1613 scope.go:117] "RemoveContainer" containerID="a84c42391a84af02fac8bc4d031f949d77c9b2ceebf766d7c6c36a32ac6a9c95"
	Aug 19 17:49:33 ha-431000 kubelet[1613]: E0819 17:49:33.664380    1613 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:49:33 ha-431000 kubelet[1613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:49:33 ha-431000 kubelet[1613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:49:33 ha-431000 kubelet[1613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:49:33 ha-431000 kubelet[1613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:50:33 ha-431000 kubelet[1613]: E0819 17:50:33.663241    1613 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:50:33 ha-431000 kubelet[1613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:50:33 ha-431000 kubelet[1613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:50:33 ha-431000 kubelet[1613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:50:33 ha-431000 kubelet[1613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:51:33 ha-431000 kubelet[1613]: E0819 17:51:33.662663    1613 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:51:33 ha-431000 kubelet[1613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:51:33 ha-431000 kubelet[1613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:51:33 ha-431000 kubelet[1613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:51:33 ha-431000 kubelet[1613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:52:33 ha-431000 kubelet[1613]: E0819 17:52:33.672157    1613 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:52:33 ha-431000 kubelet[1613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:52:33 ha-431000 kubelet[1613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:52:33 ha-431000 kubelet[1613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:52:33 ha-431000 kubelet[1613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.77s)

TestMultiControlPlane/serial/DeleteSecondaryNode (101.49s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 node delete m03 -v=7 --alsologtostderr: exit status 80 (1m36.961343505s)

-- stdout --
	* Deleting node m03 from cluster ha-431000
	
	
-- /stdout --
** stderr ** 
	I0819 10:52:53.511179    6897 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:52:53.511572    6897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:52:53.511578    6897 out.go:358] Setting ErrFile to fd 2...
	I0819 10:52:53.511582    6897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:52:53.511763    6897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:52:53.512102    6897 mustload.go:65] Loading cluster: ha-431000
	I0819 10:52:53.512399    6897 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:52:53.512728    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.512781    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.521201    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52123
	I0819 10:52:53.521594    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.522003    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.522035    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.522284    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.522397    6897 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:52:53.522477    6897 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:52:53.522573    6897 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:52:53.523558    6897 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:52:53.523794    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.523818    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.532432    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52125
	I0819 10:52:53.532766    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.533102    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.533111    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.533369    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.533480    6897 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:52:53.533863    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.533887    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.542750    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52127
	I0819 10:52:53.543125    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.543481    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.543500    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.543740    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.543847    6897 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:52:53.543943    6897 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:52:53.544044    6897 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6783
	I0819 10:52:53.545043    6897 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:52:53.545307    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.545334    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.554144    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52129
	I0819 10:52:53.554492    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.554863    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.554877    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.555092    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.555201    6897 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:52:53.555564    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.555592    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.564232    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52131
	I0819 10:52:53.564574    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.564920    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.564938    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.565142    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.565254    6897 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:52:53.565343    6897 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:52:53.565430    6897 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 6801
	I0819 10:52:53.566466    6897 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:52:53.566718    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.566740    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.575335    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52133
	I0819 10:52:53.575682    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.576011    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.576022    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.576236    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.576352    6897 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:52:53.576462    6897 api_server.go:166] Checking apiserver status ...
	I0819 10:52:53.577162    6897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:52:53.577182    6897 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:52:53.577299    6897 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:52:53.577379    6897 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:52:53.577479    6897 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:52:53.577566    6897 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:52:53.623453    6897 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2081/cgroup
	W0819 10:52:53.635237    6897 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2081/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:52:53.635311    6897 ssh_runner.go:195] Run: ls
	I0819 10:52:53.638617    6897 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:52:53.641701    6897 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:52:53.663351    6897 out.go:177] * Deleting node m03 from cluster ha-431000
	I0819 10:52:53.715922    6897 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:52:53.716224    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.716249    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.724661    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52137
	I0819 10:52:53.724998    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.725401    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.725433    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.725632    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.725746    6897 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:52:53.725861    6897 mustload.go:65] Loading cluster: ha-431000
	I0819 10:52:53.726049    6897 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:52:53.726264    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.726298    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.734595    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52139
	I0819 10:52:53.734929    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.735262    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.735278    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.735517    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.735644    6897 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:52:53.735745    6897 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:52:53.735824    6897 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:52:53.736811    6897 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:52:53.737083    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.737106    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.745448    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52141
	I0819 10:52:53.745795    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.746144    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.746157    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.746373    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.746482    6897 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:52:53.746829    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.746853    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.755197    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52143
	I0819 10:52:53.755517    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.755891    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.755908    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.756114    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.756212    6897 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:52:53.756286    6897 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:52:53.756356    6897 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6783
	I0819 10:52:53.757337    6897 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:52:53.757568    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.757589    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.765792    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52145
	I0819 10:52:53.766152    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.766471    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.766482    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.766693    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.766809    6897 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:52:53.767152    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.767175    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.775639    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52147
	I0819 10:52:53.776050    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.776420    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.776435    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.776666    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.776794    6897 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:52:53.776894    6897 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:52:53.776992    6897 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 6801
	I0819 10:52:53.778073    6897 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:52:53.778375    6897 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:52:53.778409    6897 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:52:53.786863    6897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52149
	I0819 10:52:53.787234    6897 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:52:53.787604    6897 main.go:141] libmachine: Using API Version  1
	I0819 10:52:53.787620    6897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:52:53.787828    6897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:52:53.787945    6897 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:52:53.788036    6897 api_server.go:166] Checking apiserver status ...
	I0819 10:52:53.788080    6897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:52:53.788090    6897 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:52:53.788199    6897 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:52:53.788283    6897 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:52:53.788372    6897 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:52:53.788462    6897 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:52:53.824331    6897 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2081/cgroup
	W0819 10:52:53.833241    6897 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2081/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:52:53.833297    6897 ssh_runner.go:195] Run: ls
	I0819 10:52:53.836980    6897 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:52:53.841117    6897 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0819 10:52:53.841189    6897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain ha-431000-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	W0819 10:52:53.906107    6897 node.go:126] kubectl drain node "ha-431000-m03" failed (will continue): sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl drain ha-431000-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "ha-431000-m03" not found
	I0819 10:52:53.906189    6897 ssh_runner.go:195] Run: systemctl --version
	I0819 10:52:53.906215    6897 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:52:53.906361    6897 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:52:53.906452    6897 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:52:53.906542    6897 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:52:53.906627    6897 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:52:53.935261    6897 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0819 10:52:53.983143    6897 node.go:155] successfully reset node "ha-431000-m03"
	I0819 10:52:53.983656    6897 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:52:53.983879    6897 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x414f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:52:53.984246    6897 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:52:53.984490    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:53.984497    6897 round_trippers.go:469] Request Headers:
	I0819 10:52:53.984505    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:53.984509    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:52:53.984511    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:53.990712    6897 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0819 10:52:53.990850    6897 retry.go:31] will retry after 389.326612ms: nodes "ha-431000-m03" not found
	I0819 10:52:54.382346    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:54.382367    6897 round_trippers.go:469] Request Headers:
	I0819 10:52:54.382379    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:54.382385    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:52:54.382393    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:54.385216    6897 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:54.385271    6897 retry.go:31] will retry after 779.226677ms: nodes "ha-431000-m03" not found
	I0819 10:52:55.164792    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:55.164804    6897 round_trippers.go:469] Request Headers:
	I0819 10:52:55.164811    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:55.164815    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:52:55.164818    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:55.167418    6897 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:55.167479    6897 retry.go:31] will retry after 1.240432634s: nodes "ha-431000-m03" not found
	I0819 10:52:56.408164    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:56.408235    6897 round_trippers.go:469] Request Headers:
	I0819 10:52:56.408262    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:56.408269    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:52:56.408273    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:56.411286    6897 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:56.411364    6897 retry.go:31] will retry after 2.490347025s: nodes "ha-431000-m03" not found
	I0819 10:52:58.902255    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:58.902281    6897 round_trippers.go:469] Request Headers:
	I0819 10:52:58.902292    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:58.902298    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:52:58.902303    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:58.905548    6897 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:58.905624    6897 retry.go:31] will retry after 2.16054294s: nodes "ha-431000-m03" not found
	I0819 10:53:01.067278    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:53:01.067304    6897 round_trippers.go:469] Request Headers:
	I0819 10:53:01.067314    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:53:01.067319    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:53:01.067323    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:53:01.070846    6897 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:53:01.070922    6897 retry.go:31] will retry after 2.124702438s: nodes "ha-431000-m03" not found
	I0819 10:53:03.196436    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:53:03.196448    6897 round_trippers.go:469] Request Headers:
	I0819 10:53:03.196454    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:53:03.196457    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:53:03.196460    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:53:03.198635    6897 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:53:03.198712    6897 retry.go:31] will retry after 4.38176567s: nodes "ha-431000-m03" not found
	I0819 10:53:07.585222    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:53:07.585244    6897 round_trippers.go:469] Request Headers:
	I0819 10:53:07.585256    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:53:07.585262    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:53:07.585268    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:53:07.588662    6897 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:53:07.588797    6897 retry.go:31] will retry after 12.275993753s: nodes "ha-431000-m03" not found
	I0819 10:53:19.869104    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:53:19.869161    6897 round_trippers.go:469] Request Headers:
	I0819 10:53:19.869174    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:53:19.869181    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:53:19.869189    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:53:19.872427    6897 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:53:19.872588    6897 retry.go:31] will retry after 18.830240545s: nodes "ha-431000-m03" not found
	I0819 10:53:38.705675    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:53:38.705699    6897 round_trippers.go:469] Request Headers:
	I0819 10:53:38.705711    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:53:38.705719    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:53:38.705727    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:53:38.708958    6897 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:53:38.709060    6897 retry.go:31] will retry after 28.077159433s: nodes "ha-431000-m03" not found
	I0819 10:54:06.787803    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:54:06.787823    6897 round_trippers.go:469] Request Headers:
	I0819 10:54:06.787835    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:54:06.787841    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:54:06.787846    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:54:06.790850    6897 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:54:06.790909    6897 retry.go:31] will retry after 23.540263924s: nodes "ha-431000-m03" not found
	I0819 10:54:30.332170    6897 round_trippers.go:463] DELETE https://192.169.0.254:8443/api/v1/nodes/ha-431000-m03
	I0819 10:54:30.332189    6897 round_trippers.go:469] Request Headers:
	I0819 10:54:30.332202    6897 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:54:30.332210    6897 round_trippers.go:473]     Content-Type: application/json
	I0819 10:54:30.332217    6897 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:54:30.335189    6897 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	E0819 10:54:30.335280    6897 node.go:177] kubectl delete node "ha-431000-m03" failed: nodes "ha-431000-m03" not found
	I0819 10:54:30.357018    6897 out.go:201] 
	W0819 10:54:30.377731    6897 out.go:270] X Exiting due to GUEST_NODE_DELETE: deleting node: nodes "ha-431000-m03" not found
	W0819 10:54:30.377749    6897 out.go:270] * 
	W0819 10:54:30.381121    6897 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:54:30.402838    6897 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-amd64 -p ha-431000 node delete m03 -v=7 --alsologtostderr": exit status 80
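
The stderr log above shows why the delete failed: `kubectl drain` and the follow-up DELETE /api/v1/nodes/ha-431000-m03 both hit NotFound (the node object was evidently never re-registered after the cluster restart), and minikube's retry loop backed off from 389ms up to ~28s before giving up roughly 96 seconds later with GUEST_NODE_DELETE. The sketch below reproduces the capped, jittered exponential backoff visible in the `retry.go:31] will retry after ...` lines; the constants and structure are assumptions for illustration, not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxElapsed is exceeded.
// Each wait roughly doubles and gains random jitter, which matches the
// irregular but growing intervals in the log (389ms, 779ms, 1.2s, 2.5s, ...).
func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
	start := time.Now()
	wait := 400 * time.Millisecond // assumed initial interval
	var lastErr error
	for time.Since(start) < maxElapsed {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait / 2))) // up to +50% jitter
		fmt.Printf("will retry after %v: %v\n", wait+jitter, lastErr)
		time.Sleep(wait + jitter)
		wait *= 2
	}
	return fmt.Errorf("giving up after %v: %w", time.Since(start), lastErr)
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New(`nodes "ha-431000-m03" not found`)
	}, 90*time.Second)
	fmt.Println(err)
}
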
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 7 (368.314781ms)

                                                
                                                
-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m03
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 10:54:30.484116    6917 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:54:30.484396    6917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:54:30.484402    6917 out.go:358] Setting ErrFile to fd 2...
	I0819 10:54:30.484407    6917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:54:30.484595    6917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:54:30.484797    6917 out.go:352] Setting JSON to false
	I0819 10:54:30.484816    6917 mustload.go:65] Loading cluster: ha-431000
	I0819 10:54:30.484859    6917 notify.go:220] Checking for updates...
	I0819 10:54:30.485134    6917 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:54:30.485149    6917 status.go:255] checking status of ha-431000 ...
	I0819 10:54:30.485503    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.485546    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.494602    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52156
	I0819 10:54:30.494929    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.495318    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.495345    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.495554    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.495677    6917 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:54:30.495757    6917 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:54:30.495832    6917 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:54:30.496789    6917 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0819 10:54:30.496809    6917 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:54:30.497043    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.497069    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.505502    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52158
	I0819 10:54:30.505811    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.506173    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.506194    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.506392    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.506515    6917 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:54:30.506596    6917 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:54:30.506835    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.506868    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.519033    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52160
	I0819 10:54:30.519377    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.519721    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.519732    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.519913    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.519999    6917 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:54:30.520129    6917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:54:30.520151    6917 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:54:30.520221    6917 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:54:30.520293    6917 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:54:30.520365    6917 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:54:30.520448    6917 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:54:30.549152    6917 ssh_runner.go:195] Run: systemctl --version
	I0819 10:54:30.553659    6917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:54:30.566070    6917 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:54:30.566094    6917 api_server.go:166] Checking apiserver status ...
	I0819 10:54:30.566140    6917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:54:30.578523    6917 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2081/cgroup
	W0819 10:54:30.586636    6917 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2081/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:54:30.586692    6917 ssh_runner.go:195] Run: ls
	I0819 10:54:30.589826    6917 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:54:30.594071    6917 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:54:30.594083    6917 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0819 10:54:30.594096    6917 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:54:30.594106    6917 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:54:30.594379    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.594405    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.603198    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52164
	I0819 10:54:30.603536    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.603906    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.603923    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.604140    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.604247    6917 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:54:30.604333    6917 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:54:30.604412    6917 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6783
	I0819 10:54:30.605377    6917 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0819 10:54:30.605387    6917 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:54:30.605632    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.605661    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.614308    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52166
	I0819 10:54:30.614652    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.615002    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.615019    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.615219    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.615319    6917 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:54:30.615405    6917 host.go:66] Checking if "ha-431000-m02" exists ...
	I0819 10:54:30.615644    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.615668    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.624146    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52168
	I0819 10:54:30.624467    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.624803    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.624814    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.625014    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.625139    6917 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:54:30.625262    6917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:54:30.625273    6917 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:54:30.625351    6917 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:54:30.625436    6917 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:54:30.625530    6917 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:54:30.625596    6917 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:54:30.662562    6917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:54:30.674335    6917 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:54:30.674353    6917 api_server.go:166] Checking apiserver status ...
	I0819 10:54:30.674394    6917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:54:30.686538    6917 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2106/cgroup
	W0819 10:54:30.694573    6917 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2106/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:54:30.694614    6917 ssh_runner.go:195] Run: ls
	I0819 10:54:30.697917    6917 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0819 10:54:30.700999    6917 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0819 10:54:30.701011    6917 status.go:422] ha-431000-m02 apiserver status = Running (err=<nil>)
	I0819 10:54:30.701019    6917 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:54:30.701036    6917 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:54:30.701305    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.701325    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.710186    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52172
	I0819 10:54:30.710523    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.710836    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.710845    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.711059    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.711168    6917 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:54:30.711253    6917 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:54:30.711341    6917 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 6801
	I0819 10:54:30.712325    6917 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0819 10:54:30.712334    6917 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:54:30.712580    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.712605    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.721293    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52174
	I0819 10:54:30.721618    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.721962    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.721978    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.722209    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.722323    6917 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:54:30.722405    6917 host.go:66] Checking if "ha-431000-m03" exists ...
	I0819 10:54:30.722654    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.722685    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.731388    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52176
	I0819 10:54:30.731724    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.732048    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.732057    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.732281    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.732404    6917 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:54:30.732532    6917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 10:54:30.732543    6917 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:54:30.732623    6917 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:54:30.732694    6917 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:54:30.732764    6917 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:54:30.732839    6917 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:54:30.761628    6917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:54:30.774116    6917 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0819 10:54:30.774132    6917 api_server.go:166] Checking apiserver status ...
	I0819 10:54:30.774174    6917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0819 10:54:30.783675    6917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:54:30.783687    6917 status.go:422] ha-431000-m03 apiserver status = Stopped (err=<nil>)
	I0819 10:54:30.783702    6917 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:54:30.783713    6917 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:54:30.783986    6917 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:54:30.784007    6917 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:54:30.792881    6917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52179
	I0819 10:54:30.793231    6917 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:54:30.793548    6917 main.go:141] libmachine: Using API Version  1
	I0819 10:54:30.793575    6917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:54:30.793796    6917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:54:30.793898    6917 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:54:30.793976    6917 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:54:30.794062    6917 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:54:30.794996    6917 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid 6186 missing from process table
	I0819 10:54:30.795042    6917 status.go:330] ha-431000-m04 host status = "Stopped" (err=<nil>)
	I0819 10:54:30.795051    6917 status.go:343] host is not running, skipping remaining checks
	I0819 10:54:30.795058    6917 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr" : exit status 7
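
For context on the status probes in the stderr log above: for each control-plane node, status locates the apiserver PID with pgrep, attempts to read its freezer cgroup (the `unable to find freezer cgroup` warning is apparently tolerated as non-fatal and is expected when the guest uses cgroup v2, which has no freezer controller), and finally issues an authenticated GET against /healthz. Below is a minimal, self-contained sketch of that last probe; the certificate paths and endpoint URL are taken from the kapi.go line in the earlier log, while the code itself is illustrative rather than minikube's implementation.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	base := "/Users/jenkins/minikube-integration/19478-1622/.minikube"
	// Client certificate and key for the ha-431000 profile, as reported by kapi.go.
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/ha-431000/client.crt",
		base+"/profiles/ha-431000/client.key",
	)
	if err != nil {
		log.Fatal(err)
	}
	// Cluster CA used to verify the apiserver's serving certificate.
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}},
	}
	// The load-balanced apiserver endpoint from the log.
	resp, err := client.Get("https://192.169.0.254:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
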
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (3.334895947s)
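
The `(dbg) Run:` / `(dbg) Done:` pairs throughout this report come from a harness helper that shells out to the minikube binary, captures combined output, and records the elapsed time (3.33s for the logs command above). The following is a hypothetical sketch in that style, not the actual helpers_test.go code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runDbg runs a command, captures combined stdout/stderr, and reports
// timing in the same style as the harness lines above.
func runDbg(name string, args ...string) (string, error) {
	fmt.Printf("(dbg) Run:  %s %v\n", name, args)
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("(dbg) Non-zero exit: %s: %v (%v)\n", name, err, time.Since(start))
	} else {
		fmt.Printf("(dbg) Done: %s (%v)\n", name, time.Since(start))
	}
	return string(out), err
}

func main() {
	out, _ := runDbg("out/minikube-darwin-amd64", "-p", "ha-431000", "logs", "-n", "25")
	fmt.Println(len(out), "bytes of logs captured")
}
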
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT |                     |
	|         | busybox-7dff88458-wfcpq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-431000 -v=7                | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node stop m02 -v=7         | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:43 PDT | 19 Aug 24 10:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node start m02 -v=7        | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:45 PDT | 19 Aug 24 10:45 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-431000 -v=7               | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:46 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-431000 -v=7                    | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:46 PDT | 19 Aug 24 10:47 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-431000 --wait=true -v=7        | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:47 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-431000                    | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:52 PDT |                     |
	| node    | ha-431000 node delete m03 -v=7       | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:52 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:47:12
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:47:12.990834    6731 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:47:12.991103    6731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:47:12.991108    6731 out.go:358] Setting ErrFile to fd 2...
	I0819 10:47:12.991112    6731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:47:12.991281    6731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:47:12.992723    6731 out.go:352] Setting JSON to false
	I0819 10:47:13.017592    6731 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4603,"bootTime":1724085030,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:47:13.017712    6731 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:47:13.040160    6731 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:47:13.085144    6731 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:47:13.085199    6731 notify.go:220] Checking for updates...
	I0819 10:47:13.129094    6731 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:13.150001    6731 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:47:13.191985    6731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:47:13.234991    6731 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:47:13.255968    6731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:47:13.277879    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:13.278061    6731 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:47:13.278758    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:13.278849    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:13.288403    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52017
	I0819 10:47:13.288766    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:13.289188    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:13.289197    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:13.289457    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:13.289596    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:13.317906    6731 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:47:13.359906    6731 start.go:297] selected driver: hyperkit
	I0819 10:47:13.359936    6731 start.go:901] validating driver "hyperkit" against &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:47:13.360173    6731 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:47:13.360383    6731 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:47:13.360591    6731 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:47:13.373620    6731 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:47:13.379058    6731 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:13.379083    6731 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:47:13.382480    6731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:47:13.382556    6731 cni.go:84] Creating CNI manager for ""
	I0819 10:47:13.382566    6731 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:47:13.382642    6731 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:47:13.382745    6731 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:47:13.427064    6731 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:47:13.448053    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:47:13.448130    6731 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:47:13.448197    6731 cache.go:56] Caching tarball of preloaded images
	I0819 10:47:13.448409    6731 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:47:13.448432    6731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:47:13.448617    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:13.449596    6731 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:47:13.449728    6731 start.go:364] duration metric: took 105.822µs to acquireMachinesLock for "ha-431000"
	I0819 10:47:13.449768    6731 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:47:13.449785    6731 fix.go:54] fixHost starting: 
	I0819 10:47:13.450204    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:13.450230    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:13.463559    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52019
	I0819 10:47:13.464010    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:13.464458    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:13.464469    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:13.464831    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:13.465014    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:13.465167    6731 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:47:13.465295    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:13.465439    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 4802
	I0819 10:47:13.466971    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 4802 missing from process table
	I0819 10:47:13.467037    6731 fix.go:112] recreateIfNeeded on ha-431000: state=Stopped err=<nil>
	I0819 10:47:13.467066    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	W0819 10:47:13.467199    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:47:13.510101    6731 out.go:177] * Restarting existing hyperkit VM for "ha-431000" ...
	I0819 10:47:13.531063    6731 main.go:141] libmachine: (ha-431000) Calling .Start
	I0819 10:47:13.531337    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:13.531403    6731 main.go:141] libmachine: (ha-431000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:47:13.533562    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 4802 missing from process table
	I0819 10:47:13.533575    6731 main.go:141] libmachine: (ha-431000) DBG | pid 4802 is in state "Stopped"
	I0819 10:47:13.533592    6731 main.go:141] libmachine: (ha-431000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid...
	I0819 10:47:13.534063    6731 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:47:13.685824    6731 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:47:13.685856    6731 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:47:13.685937    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c10e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:13.685980    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c10e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:13.686041    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:47:13.686089    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:47:13.686116    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:47:13.687515    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 DEBUG: hyperkit: Pid is 6743
	I0819 10:47:13.687875    6731 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:47:13.687888    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:13.687950    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:47:13.689549    6731 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:47:13.689620    6731 main.go:141] libmachine: (ha-431000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:47:13.689637    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d62c}
	I0819 10:47:13.689650    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 10:47:13.689661    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:47:13.689670    6731 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d1f7}
	I0819 10:47:13.689679    6731 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:47:13.689685    6731 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
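
The hyperkit driver has no guest agent, so it recovers the VM's IP by scanning macOS's DHCP lease database for the MAC it generated at boot, as the DBG lines above show. A minimal sketch of that lookup, assuming /var/db/dhcpd_leases entries carry name=/ip_address=/hw_address= fields (the log prints their parsed form; the exact on-disk field names are an assumption here):

// dhcpleases.go sketch: resolve a hyperkit VM's IP from its MAC address.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// Remember the most recent address; it belongs to the
			// same lease entry as the hw_address line that follows.
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address carries a type prefix, e.g. "1,b2:ad:7c:2f:19:d9".
			if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for MAC %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "b2:ad:7c:2f:19:d9")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

For the run above, ipForMAC would return 192.169.0.5, the fourth of the seven lease entries found.
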
	I0819 10:47:13.689750    6731 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:47:13.690466    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:13.690696    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:13.691360    6731 machine.go:93] provisionDockerMachine start ...
	I0819 10:47:13.691391    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:13.691550    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:13.691652    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:13.691765    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:13.691853    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:13.691949    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:13.692101    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:13.692310    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:13.692319    6731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:47:13.695286    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:47:13.768567    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:47:13.769376    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:13.769389    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:13.769397    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:13.769403    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:14.169410    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:47:14.169434    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:47:14.284387    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:14.284423    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:14.284433    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:14.284452    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:14.285281    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:47:14.285292    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:47:20.122707    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:47:20.122768    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:47:20.122798    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:47:20.146889    6731 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:47:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:47:24.038753    6731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
	I0819 10:47:27.097051    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
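
The refused dial at 10:47:24 followed by a successful hostname command at 10:47:27 is the usual boot race: the VM answers ping before sshd is listening. A minimal sketch of the retry loop this implies; the 3-second interval and 2-minute budget are illustrative, not minikube's actual values:

// waitssh.go sketch: block until a VM's SSH port accepts connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not reachable at %s: %v", addr, err)
		}
		time.Sleep(3 * time.Second) // guest is still booting; try again
	}
}

func main() {
	if err := waitForSSH("192.169.0.5:22", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh is up")
}
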
	I0819 10:47:27.097068    6731 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:47:27.097216    6731 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:47:27.097227    6731 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:47:27.097372    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.097464    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.097585    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.097687    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.097778    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.097909    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.098097    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.098119    6731 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:47:27.159700    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:47:27.159721    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.159879    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.159986    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.160071    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.160158    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.160304    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.160447    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.160458    6731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:47:27.217596    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
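
The script above keeps /etc/hosts consistent with the new hostname: if no line already ends in ha-431000, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. A sketch of how such a script can be composed for an arbitrary hostname (the function name is illustrative; the script body is taken verbatim from the log):

// hostsfix.go sketch: render the /etc/hosts fixup script for a hostname.
package main

import "fmt"

func hostsFixupScript(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() { fmt.Println(hostsFixupScript("ha-431000")) }
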
	I0819 10:47:27.217617    6731 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:47:27.217642    6731 buildroot.go:174] setting up certificates
	I0819 10:47:27.217648    6731 provision.go:84] configureAuth start
	I0819 10:47:27.217654    6731 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:47:27.217789    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:27.217907    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.218009    6731 provision.go:143] copyHostCerts
	I0819 10:47:27.218040    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:27.218106    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:47:27.218115    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:27.219007    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:47:27.219230    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:27.219271    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:47:27.219275    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:27.219362    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:47:27.219509    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:27.219546    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:47:27.219551    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:27.219626    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:47:27.219767    6731 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:47:27.270993    6731 provision.go:177] copyRemoteCerts
	I0819 10:47:27.271039    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:47:27.271051    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.271175    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.271261    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.271352    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.271445    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:27.302754    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:47:27.302826    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:47:27.322815    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:47:27.322877    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0819 10:47:27.342451    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:47:27.342511    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:47:27.362246    6731 provision.go:87] duration metric: took 144.581948ms to configureAuth
	I0819 10:47:27.362260    6731 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:47:27.362446    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:27.362461    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:27.362588    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.362675    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.362776    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.362858    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.362949    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.363077    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.363202    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.363214    6731 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:47:27.413858    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:47:27.413870    6731 buildroot.go:70] root file system type: tmpfs
	I0819 10:47:27.413956    6731 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:47:27.413972    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.414097    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.414209    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.414293    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.414367    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.414499    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.414633    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.414678    6731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:47:27.476805    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:47:27.476825    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:27.476950    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:27.477051    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.477141    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:27.477235    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:27.477363    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:27.477517    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:27.477530    6731 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:47:29.141388    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:47:29.141402    6731 machine.go:96] duration metric: took 15.449700536s to provisionDockerMachine
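
The diff-or-replace one-liner above makes provisioning idempotent: when the freshly rendered docker.service matches what is already installed, diff exits 0 and the short-circuit skips the move, daemon-reload, and restart entirely (here diff fails because the unit does not exist yet, so the new file is installed and docker is enabled). A local Go sketch of the same write-only-if-changed pattern, with illustrative paths and a stand-in reload command:

// unitswap.go sketch: install a config file only when its content changed.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func installIfChanged(path string, content []byte, reload ...string) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return nil // unchanged, like `diff -u` exiting 0 above
	}
	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	if len(reload) > 0 {
		// stand-in for `systemctl daemon-reload && systemctl restart ...`
		return exec.Command(reload[0], reload[1:]...).Run()
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n")
	if err := installIfChanged("/tmp/docker.service.example", unit, "true"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
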
	I0819 10:47:29.141419    6731 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:47:29.141427    6731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:47:29.141442    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.141639    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:47:29.141653    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.141751    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.141838    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.141944    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.142024    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.177773    6731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:47:29.182929    6731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:47:29.182945    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:47:29.183045    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:47:29.183232    6731 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:47:29.183239    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:47:29.183446    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:47:29.193329    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:29.226539    6731 start.go:296] duration metric: took 85.108142ms for postStartSetup
	I0819 10:47:29.226566    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.226743    6731 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:47:29.226766    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.226881    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.226983    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.227075    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.227158    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.259218    6731 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:47:29.259277    6731 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:47:29.313364    6731 fix.go:56] duration metric: took 15.863243842s for fixHost
	I0819 10:47:29.313386    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.313537    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.313631    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.313718    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.313802    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.313927    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:29.314073    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:47:29.314080    6731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:47:29.366201    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089649.282494519
	
	I0819 10:47:29.366218    6731 fix.go:216] guest clock: 1724089649.282494519
	I0819 10:47:29.366223    6731 fix.go:229] Guest: 2024-08-19 10:47:29.282494519 -0700 PDT Remote: 2024-08-19 10:47:29.313376 -0700 PDT m=+16.361598467 (delta=-30.881481ms)
	I0819 10:47:29.366239    6731 fix.go:200] guest clock delta is within tolerance: -30.881481ms
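
fix.go then compares the guest clock (the `date +%s.%N` output above) against the host clock and only intervenes if the skew exceeds a tolerance. A minimal sketch of that comparison using the two timestamps from this run; the 2-second threshold is an assumption, not minikube's actual limit:

// clockskew.go sketch: compute guest/host clock delta as fix.go does above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (seconds.nanoseconds)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724089649.282494519") // from the log above
	if err != nil {
		panic(err)
	}
	host := time.Unix(1724089649, 313376000) // the Remote timestamp above
	delta := guest.Sub(host)                 // -30.881481ms in this run
	const tolerance = 2 * time.Second        // assumed threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, delta > -tolerance && delta < tolerance)
}
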
	I0819 10:47:29.366243    6731 start.go:83] releasing machines lock for "ha-431000", held for 15.916161384s
	I0819 10:47:29.366262    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366404    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:29.366507    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366799    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366892    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:29.366979    6731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:47:29.367012    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.367029    6731 ssh_runner.go:195] Run: cat /version.json
	I0819 10:47:29.367039    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:29.367114    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.367149    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:29.367227    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.367237    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:29.367322    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.367335    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:29.367423    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.367436    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:29.444266    6731 ssh_runner.go:195] Run: systemctl --version
	I0819 10:47:29.449674    6731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:47:29.454027    6731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:47:29.454072    6731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:47:29.466466    6731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:47:29.466477    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:29.466578    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:29.483411    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:47:29.492453    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:47:29.501213    6731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:47:29.501260    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:47:29.510090    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:29.519075    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:47:29.528065    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:29.536949    6731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:47:29.545786    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:47:29.554573    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:47:29.563322    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:47:29.572057    6731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:47:29.579919    6731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:47:29.588348    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:29.686832    6731 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:47:29.707105    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:29.707180    6731 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:47:29.719452    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:29.730098    6731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:47:29.745544    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:29.756577    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:29.767542    6731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:47:29.790919    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:29.802179    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:29.816853    6731 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:47:29.819743    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:47:29.827667    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:47:29.841027    6731 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:47:29.941968    6731 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:47:30.045493    6731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:47:30.045564    6731 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:47:30.059349    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:30.153983    6731 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:47:32.475528    6731 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.321474833s)
	I0819 10:47:32.475593    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:47:32.486499    6731 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:47:32.499892    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:32.510342    6731 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:47:32.602953    6731 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:47:32.726572    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:32.829541    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:47:32.850769    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:32.861330    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:32.957342    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:47:33.019734    6731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:47:33.019811    6731 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:47:33.024665    6731 start.go:563] Will wait 60s for crictl version
	I0819 10:47:33.024717    6731 ssh_runner.go:195] Run: which crictl
	I0819 10:47:33.028242    6731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:47:33.053696    6731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:47:33.053765    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:33.070786    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:33.110368    6731 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:47:33.110419    6731 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:47:33.110842    6731 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:47:33.115455    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:47:33.125038    6731 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:47:33.125131    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:47:33.125186    6731 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:47:33.138502    6731 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:47:33.138514    6731 docker.go:615] Images already preloaded, skipping extraction
	I0819 10:47:33.138587    6731 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:47:33.152253    6731 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:47:33.152273    6731 cache_images.go:84] Images are preloaded, skipping loading
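
Extraction of the preload tarball is skipped because every expected image already shows up in `docker images --format {{.Repository}}:{{.Tag}}`. A minimal sketch of that check; the expected list below is abridged from the stdout block above:

// preloadcheck.go sketch: skip image extraction when the runtime has them.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{ // abridged from the preloaded-images list above
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would extract preload:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}
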
	I0819 10:47:33.152286    6731 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:47:33.152388    6731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:47:33.152487    6731 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:47:33.188995    6731 cni.go:84] Creating CNI manager for ""
	I0819 10:47:33.189008    6731 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:47:33.189020    6731 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:47:33.189037    6731 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:47:33.189121    6731 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 10:47:33.189137    6731 kube-vip.go:115] generating kube-vip config ...
	I0819 10:47:33.189189    6731 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:47:33.201830    6731 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:47:33.201940    6731 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 10:47:33.201997    6731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:47:33.210450    6731 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:47:33.210495    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:47:33.217871    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:47:33.231674    6731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:47:33.245013    6731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:47:33.259054    6731 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:47:33.272685    6731 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:47:33.275698    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
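
The /etc/hosts update above is an idempotent replace-or-append: any stale control-plane.minikube.internal line is filtered out, the fresh mapping appended, and the file swapped in via a temp copy. The same pattern as a reusable helper (the function name is illustrative, not from minikube):

    # Replace-or-append a tab-separated hosts entry, as in the logged command.
    update_hosts_entry() {
      ip=$1; host=$2
      { grep -v $'\t'"${host}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 192.169.0.254 control-plane.minikube.internal
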
	I0819 10:47:33.285047    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:33.385931    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:47:33.400131    6731 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:47:33.400143    6731 certs.go:194] generating shared ca certs ...
	I0819 10:47:33.400154    6731 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:33.400345    6731 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:47:33.400418    6731 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:47:33.400428    6731 certs.go:256] generating profile certs ...
	I0819 10:47:33.400545    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:47:33.400566    6731 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59
	I0819 10:47:33.400581    6731 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0819 10:47:33.706693    6731 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59 ...
	I0819 10:47:33.706714    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59: {Name:mk3ef913d0a2b6704747c9cac46f692f95ca83d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:33.707051    6731 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59 ...
	I0819 10:47:33.707062    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59: {Name:mk47cdc11bd849114252b3917882ba0c41ebb9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:33.707265    6731 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.cbca8d59 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:47:33.707470    6731 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
	I0819 10:47:33.707706    6731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:47:33.707719    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:47:33.707742    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:47:33.707763    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:47:33.707783    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:47:33.707800    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:47:33.707818    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:47:33.707836    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:47:33.707854    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:47:33.707965    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:47:33.708012    6731 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:47:33.708021    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:47:33.708051    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:47:33.708080    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:47:33.708108    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:47:33.708172    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:33.708203    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:47:33.708224    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:47:33.708242    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:33.708696    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:47:33.750639    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:47:33.793357    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:47:33.817739    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:47:33.839363    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 10:47:33.859538    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:47:33.879468    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:47:33.899477    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:47:33.919387    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:47:33.939367    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:47:33.959111    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:47:33.978053    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:47:33.991986    6731 ssh_runner.go:195] Run: openssl version
	I0819 10:47:33.996321    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:47:34.004824    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:47:34.008214    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:47:34.008253    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:47:34.012526    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:47:34.020744    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:47:34.029254    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:47:34.032767    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:47:34.032806    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:47:34.037138    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:47:34.045595    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:47:34.053763    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:34.057262    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:34.057304    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:34.061509    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
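
The b5213941.0-style link names come from OpenSSL's subject-hash lookup convention: `openssl x509 -hash` prints the truncated subject hash, and OpenSSL resolves trust anchors in /etc/ssl/certs as <hash>.N. A quick consistency check, using the minikube CA path from this log:

    # The symlink name and the cert's subject hash should agree.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"
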
	I0819 10:47:34.070103    6731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:47:34.073578    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 10:47:34.078201    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 10:47:34.082612    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 10:47:34.087103    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 10:47:34.091437    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 10:47:34.095760    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
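
Each `-checkend 86400` probe above makes openssl exit non-zero if the certificate expires within 24 hours; minikube uses that exit status to decide whether an existing cert can be reused or must be regenerated. The same probe with an explicit result, for one of the certs checked above:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h; regeneration would be triggered"
    fi
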
	I0819 10:47:34.100115    6731 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:47:34.100230    6731 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:47:34.113393    6731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:47:34.120906    6731 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 10:47:34.120917    6731 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 10:47:34.120957    6731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 10:47:34.128485    6731 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:47:34.128797    6731 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-431000" does not appear in /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:34.128883    6731 kubeconfig.go:62] /Users/jenkins/minikube-integration/19478-1622/kubeconfig needs updating (will repair): [kubeconfig missing "ha-431000" cluster setting kubeconfig missing "ha-431000" context setting]
	I0819 10:47:34.129058    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:34.129469    6731 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:34.129662    6731 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1139f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:47:34.129951    6731 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:47:34.130122    6731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 10:47:34.137350    6731 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0819 10:47:34.137364    6731 kubeadm.go:597] duration metric: took 16.443406ms to restartPrimaryControlPlane
	I0819 10:47:34.137370    6731 kubeadm.go:394] duration metric: took 37.259659ms to StartCluster
	I0819 10:47:34.137379    6731 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:34.137458    6731 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:34.137795    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:34.138049    6731 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:47:34.138062    6731 start.go:241] waiting for startup goroutines ...
	I0819 10:47:34.138093    6731 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:47:34.138228    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:34.182792    6731 out.go:177] * Enabled addons: 
	I0819 10:47:34.203662    6731 addons.go:510] duration metric: took 65.572958ms for enable addons: enabled=[]
	I0819 10:47:34.203791    6731 start.go:246] waiting for cluster config update ...
	I0819 10:47:34.203803    6731 start.go:255] writing updated cluster config ...
	I0819 10:47:34.226648    6731 out.go:201] 
	I0819 10:47:34.250149    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:34.250276    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:34.272715    6731 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:47:34.314737    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:47:34.314772    6731 cache.go:56] Caching tarball of preloaded images
	I0819 10:47:34.314979    6731 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:47:34.315025    6731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:47:34.315140    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:34.316055    6731 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:47:34.316175    6731 start.go:364] duration metric: took 95.252µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:47:34.316201    6731 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:47:34.316218    6731 fix.go:54] fixHost starting: m02
	I0819 10:47:34.316649    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:34.316675    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:34.325824    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52042
	I0819 10:47:34.326364    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:34.326725    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:34.326734    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:34.326990    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:34.327207    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:34.327371    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:47:34.327556    6731 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:34.327684    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6436
	I0819 10:47:34.328623    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6436 missing from process table
	I0819 10:47:34.328664    6731 fix.go:112] recreateIfNeeded on ha-431000-m02: state=Stopped err=<nil>
	I0819 10:47:34.328674    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	W0819 10:47:34.328799    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:47:34.376702    6731 out.go:177] * Restarting existing hyperkit VM for "ha-431000-m02" ...
	I0819 10:47:34.397748    6731 main.go:141] libmachine: (ha-431000-m02) Calling .Start
	I0819 10:47:34.398040    6731 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:34.398181    6731 main.go:141] libmachine: (ha-431000-m02) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:47:34.399890    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6436 missing from process table
	I0819 10:47:34.399903    6731 main.go:141] libmachine: (ha-431000-m02) DBG | pid 6436 is in state "Stopped"
	I0819 10:47:34.399920    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid...
	I0819 10:47:34.400291    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:47:34.428075    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:47:34.428103    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:47:34.428232    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003af200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:34.428264    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003af200)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:47:34.428356    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:47:34.428395    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:47:34.428414    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:47:34.429765    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 DEBUG: hyperkit: Pid is 6783
	I0819 10:47:34.430472    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:47:34.430523    6731 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:34.430650    6731 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6783
	I0819 10:47:34.432548    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:47:34.432573    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:47:34.432586    6731 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d6ab}
	I0819 10:47:34.432599    6731 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d62c}
	I0819 10:47:34.432608    6731 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:47:34.432619    6731 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
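
With no guest agent, the hyperkit driver discovers the VM's address by matching its generated MAC against the macOS DHCP lease database, exactly as the search above shows. The manual equivalent on the host, with the lease file and MAC from this log:

    # Find the lease record for the m02 VM by its MAC address.
    grep -B2 -A2 '5a:74:68:47:b9:72' /var/db/dhcpd_leases
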
	I0819 10:47:34.432669    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:47:34.433339    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:34.433544    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:47:34.434121    6731 machine.go:93] provisionDockerMachine start ...
	I0819 10:47:34.434131    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:34.434259    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:34.434360    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:34.434461    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:34.434563    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:34.434665    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:34.434786    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:34.434931    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:34.434939    6731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:47:34.437670    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:47:34.446364    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:47:34.447557    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:34.447574    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:34.447585    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:34.447595    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:34.831206    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:47:34.831223    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:47:34.946012    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:47:34.946044    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:47:34.946065    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:47:34.946082    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:47:34.946901    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:47:34.946912    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:47:40.531269    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:47:40.531330    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:47:40.531340    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:47:40.556233    6731 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:47:45.507448    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:47:45.507462    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:47:45.507581    6731 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:47:45.507593    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:47:45.507670    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.507776    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.507909    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.507996    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.508101    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.508234    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.508381    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.508389    6731 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:47:45.583754    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:47:45.583774    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.583905    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.584002    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.584099    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.584184    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.584323    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.584482    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.584494    6731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:47:45.658171    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:47:45.658187    6731 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:47:45.658197    6731 buildroot.go:174] setting up certificates
	I0819 10:47:45.658205    6731 provision.go:84] configureAuth start
	I0819 10:47:45.658211    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:47:45.658365    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:45.658474    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.658558    6731 provision.go:143] copyHostCerts
	I0819 10:47:45.658585    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:45.658635    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:47:45.658641    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:47:45.658762    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:47:45.658966    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:45.658995    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:47:45.658999    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:47:45.659067    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:47:45.659209    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:45.659236    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:47:45.659241    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:47:45.659309    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:47:45.659487    6731 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
	I0819 10:47:45.772365    6731 provision.go:177] copyRemoteCerts
	I0819 10:47:45.772449    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:47:45.772468    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.772616    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.772719    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.772815    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.772905    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:45.813424    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:47:45.813495    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:47:45.833296    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:47:45.833365    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:47:45.853251    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:47:45.853315    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:47:45.873370    6731 provision.go:87] duration metric: took 215.153593ms to configureAuth
	I0819 10:47:45.873384    6731 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:47:45.873555    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:45.873574    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:45.873707    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.873815    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.873904    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.874006    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.874106    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.874221    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.874350    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.874357    6731 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:47:45.937816    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:47:45.937826    6731 buildroot.go:70] root file system type: tmpfs
	I0819 10:47:45.937934    6731 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:47:45.937947    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:45.938086    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:45.938186    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.938276    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:45.938370    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:45.938507    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:45.938641    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:45.938689    6731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:47:46.014680    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:47:46.014697    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:46.014833    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:46.014924    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:46.015010    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:46.015092    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:46.015215    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:46.015354    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:46.015366    6731 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:47:47.693084    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
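
The write-to-.new-then-diff idiom keeps provisioning idempotent: the unit is swapped in and Docker restarted only when the rendered file actually differs, and the empty `ExecStart=` line clears the inherited start command, as the unit's own comments explain. A sketch for verifying the result inside the guest (commands assumed available in the buildroot image):

    # Expect two ExecStart lines: the empty reset and the real dockerd invocation.
    systemctl cat docker | grep -c '^ExecStart='
    # Lint the installed unit for syntax problems.
    systemd-analyze verify /lib/systemd/system/docker.service
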
	
	I0819 10:47:47.693099    6731 machine.go:96] duration metric: took 13.258686385s to provisionDockerMachine
	I0819 10:47:47.693106    6731 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:47:47.693114    6731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:47:47.693124    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:47.693322    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:47:47.693338    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:47.693428    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:47.693543    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.693661    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:47.693761    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:47.738652    6731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:47:47.742121    6731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:47:47.742133    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:47:47.742223    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:47:47.742376    6731 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:47:47.742383    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:47:47.742539    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:47:47.750138    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:47.780304    6731 start.go:296] duration metric: took 87.187547ms for postStartSetup
	I0819 10:47:47.780325    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:47.780489    6731 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:47:47.780503    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:47.780584    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:47.780680    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.780768    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:47.780844    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:47.820828    6731 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:47:47.820883    6731 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:47:47.874212    6731 fix.go:56] duration metric: took 13.557703241s for fixHost
	I0819 10:47:47.874239    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:47.874390    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:47.874493    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.874580    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:47.874675    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:47.874801    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:47:47.874942    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:47:47.874950    6731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:47:47.939805    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089667.971112519
	
	I0819 10:47:47.939818    6731 fix.go:216] guest clock: 1724089667.971112519
	I0819 10:47:47.939826    6731 fix.go:229] Guest: 2024-08-19 10:47:47.971112519 -0700 PDT Remote: 2024-08-19 10:47:47.874228 -0700 PDT m=+34.922052537 (delta=96.884519ms)
	I0819 10:47:47.939836    6731 fix.go:200] guest clock delta is within tolerance: 96.884519ms
	I0819 10:47:47.939839    6731 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.623361057s
	I0819 10:47:47.939855    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:47.939978    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:47.963353    6731 out.go:177] * Found network options:
	I0819 10:47:47.984541    6731 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:47:48.006564    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:47:48.006602    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:48.007422    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:48.007661    6731 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:47:48.007799    6731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:47:48.007841    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:47:48.007857    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:47:48.007960    6731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:47:48.007982    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:47:48.008073    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:48.008275    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:48.008303    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:47:48.008450    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:48.008512    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:47:48.008705    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:47:48.008702    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:47:48.008832    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:47:48.046347    6731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:47:48.046407    6731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:47:48.092373    6731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:47:48.092395    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:48.092498    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:48.108693    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:47:48.117700    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:47:48.126528    6731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:47:48.126570    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:47:48.135370    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:48.144295    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:47:48.153239    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:47:48.162188    6731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:47:48.171097    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:47:48.180126    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:47:48.188940    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:47:48.197810    6731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:47:48.205812    6731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:47:48.213773    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:48.325175    6731 ssh_runner.go:195] Run: sudo systemctl restart containerd
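The sed pipeline above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image, forces SystemdCgroup = false so containerd uses the cgroupfs driver, and normalizes every runtime reference to io.containerd.runc.v2, before systemd is reloaded and containerd restarted. A sketch of the same rewrites expressed as Go regular expressions over an in-memory config (the sample TOML is illustrative):

package main

import (
	"fmt"
	"regexp"
)

// The sed edits from the log, expressed as Go regex rewrites over config.toml.
var rewrites = []struct{ pattern, repl string }{
	{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`},
	{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
	{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // cgroupfs driver
	{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
	{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
}

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
  runtime_type = "io.containerd.runtime.v1.linux"`
	for _, r := range rewrites {
		conf = regexp.MustCompile(r.pattern).ReplaceAllString(conf, r.repl)
	}
	fmt.Println(conf)
}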
	I0819 10:47:48.347923    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:47:48.347991    6731 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:47:48.361302    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:48.374626    6731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:47:48.389101    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:47:48.399756    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:48.409828    6731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:47:48.432006    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:47:48.442558    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:47:48.457632    6731 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:47:48.460581    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:47:48.467778    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:47:48.481436    6731 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:47:48.581769    6731 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:47:48.698298    6731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:47:48.698327    6731 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:47:48.712343    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:48.807611    6731 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:47:51.175487    6731 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.367806337s)
	I0819 10:47:51.175551    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:47:51.185809    6731 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:47:51.199305    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:51.209999    6731 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:47:51.305659    6731 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:47:51.404114    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:51.515116    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:47:51.528971    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:47:51.540018    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:51.642211    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:47:51.708864    6731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:47:51.708942    6731 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:47:51.713456    6731 start.go:563] Will wait 60s for crictl version
	I0819 10:47:51.713510    6731 ssh_runner.go:195] Run: which crictl
	I0819 10:47:51.719286    6731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:47:51.744566    6731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
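Both 60s waits above poll for a condition: first a stat on the cri-dockerd socket path, then availability of crictl. A sketch of such a poll-with-deadline loop (the 500ms interval is an assumption; minikube's actual wait logic lives in start.go and may differ):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the deadline passes,
// mirroring "Will wait 60s for socket path /var/run/cri-dockerd.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("socket ready")
}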
	I0819 10:47:51.744636    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:51.762063    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:47:51.802673    6731 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:47:51.844258    6731 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:47:51.865266    6731 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:47:51.865575    6731 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:47:51.869247    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
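The hosts-file update above is a filter-and-append: grep -v drops any existing line ending in a tab plus host.minikube.internal, echo appends the fresh mapping, and the result is written to a temp file and copied back over /etc/hosts in one sudo cp. The same logic in Go, operating on a string for illustration:

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<host>" and appends the
// desired "ip\thost" mapping, mirroring the grep -v / echo pipeline above.
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.169.0.1", "host.minikube.internal"))
}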
	I0819 10:47:51.879589    6731 mustload.go:65] Loading cluster: ha-431000
	I0819 10:47:51.879763    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:51.879994    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:51.880010    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:51.889072    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52064
	I0819 10:47:51.889483    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:51.889854    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:51.889872    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:51.890119    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:51.890230    6731 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:47:51.890313    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:47:51.890398    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:47:51.891393    6731 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:47:51.891646    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:47:51.891661    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:47:51.900428    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52066
	I0819 10:47:51.900763    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:47:51.901079    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:47:51.901089    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:47:51.901317    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:47:51.901415    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:47:51.901514    6731 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.6
	I0819 10:47:51.901521    6731 certs.go:194] generating shared ca certs ...
	I0819 10:47:51.901534    6731 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:47:51.901670    6731 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:47:51.901723    6731 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:47:51.901732    6731 certs.go:256] generating profile certs ...
	I0819 10:47:51.901831    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:47:51.901922    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.f69e9b91
	I0819 10:47:51.901978    6731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:47:51.901986    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:47:51.902006    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:47:51.902026    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:47:51.902044    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:47:51.902062    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:47:51.902080    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:47:51.902099    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:47:51.902116    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:47:51.902197    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:47:51.902236    6731 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:47:51.902244    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:47:51.902283    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:47:51.902314    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:47:51.902343    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:47:51.902410    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:47:51.902441    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:47:51.902461    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:47:51.902483    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:51.902508    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:47:51.902593    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:47:51.902677    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:47:51.902761    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:47:51.902837    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:47:51.926599    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:47:51.930274    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:47:51.938012    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:47:51.941060    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:47:51.948752    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:47:51.951705    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:47:51.959653    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:47:51.962721    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:47:51.971351    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:47:51.974362    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:47:51.982204    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:47:51.985240    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:47:51.993894    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:47:52.013902    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:47:52.033528    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:47:52.053096    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:47:52.072504    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 10:47:52.091757    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:47:52.110982    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:47:52.130616    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:47:52.150337    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:47:52.170242    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:47:52.189881    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:47:52.209131    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:47:52.222937    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:47:52.236606    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:47:52.250135    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:47:52.263801    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:47:52.277449    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:47:52.290914    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:47:52.304537    6731 ssh_runner.go:195] Run: openssl version
	I0819 10:47:52.308871    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:47:52.317959    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:47:52.321340    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:47:52.321374    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:47:52.325500    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:47:52.334569    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:47:52.343508    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:52.346908    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:52.346954    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:47:52.351191    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 10:47:52.360097    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:47:52.369144    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:47:52.372634    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:47:52.372668    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:47:52.377048    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
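Each CA installed above is made visible to OpenSSL's lookup by symlinking /etc/ssl/certs/<subject-hash>.0 to the certificate, where the hash is what openssl x509 -hash -noout prints (3ec20f2e, b5213941, and 51391683 in this run). A sketch that shells out the same way (it requires the openssl binary on PATH; the paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and
// creates the <hash>.0 symlink that OpenSSL uses for CA lookup.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs equivalent: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("created", link)
}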
	I0819 10:47:52.385997    6731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:47:52.389485    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 10:47:52.393773    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 10:47:52.398077    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 10:47:52.402284    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 10:47:52.406494    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 10:47:52.410784    6731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
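The openssl x509 -checkend 86400 calls above exit non-zero when a certificate expires within the next 86400 seconds (24 hours), which is how the run decides whether regeneration is needed. A pure-Go equivalent using crypto/x509 (the file path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// matching the semantics of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}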
	I0819 10:47:52.415017    6731 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0819 10:47:52.415077    6731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:47:52.415094    6731 kube-vip.go:115] generating kube-vip config ...
	I0819 10:47:52.415128    6731 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:47:52.428484    6731 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:47:52.428533    6731 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
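The env block in the generated manifest enables leader election with vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1; for the election to make progress these must satisfy leaseDuration > renewDeadline > retryPeriod. A tiny sketch checking that invariant on the values above (the helper is illustrative, not kube-vip's code):

package main

import (
	"fmt"
	"time"
)

// validElectionTiming enforces the usual Kubernetes leader-election
// constraint leaseDuration > renewDeadline > retryPeriod.
func validElectionTiming(lease, renew, retry time.Duration) bool {
	return lease > renew && renew > retry && retry > 0
}

func main() {
	// Values from the kube-vip env block above.
	fmt.Println(validElectionTiming(5*time.Second, 3*time.Second, 1*time.Second)) // true
}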
	I0819 10:47:52.428584    6731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:47:52.436426    6731 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:47:52.436471    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:47:52.443594    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:47:52.457212    6731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:47:52.470304    6731 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:47:52.484055    6731 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:47:52.486893    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:47:52.496372    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:52.591931    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:47:52.607116    6731 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:47:52.607291    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:47:52.628710    6731 out.go:177] * Verifying Kubernetes components...
	I0819 10:47:52.670346    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:47:52.783782    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:47:52.798292    6731 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:47:52.798497    6731 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1139f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:47:52.798536    6731 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:47:52.798707    6731 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m02" to be "Ready" ...
	I0819 10:47:52.798781    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:47:52.798786    6731 round_trippers.go:469] Request Headers:
	I0819 10:47:52.798795    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:47:52.798799    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.294663    6731 round_trippers.go:574] Response Status: 200 OK in 8495 milliseconds
	I0819 10:48:01.295619    6731 node_ready.go:49] node "ha-431000-m02" has status "Ready":"True"
	I0819 10:48:01.295631    6731 node_ready.go:38] duration metric: took 8.496725269s for node "ha-431000-m02" to be "Ready" ...
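The Ready wait above repeatedly GETs /api/v1/nodes/ha-431000-m02 and inspects the condition of type Ready in the returned status. A minimal sketch of that condition check against the core/v1 Node JSON shape (authentication and the polling loop are omitted):

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal slice of the core/v1 Node schema needed for the Ready check.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isReady mirrors the check behind node_ready.go: the node is Ready when
// the condition of type "Ready" has status "True".
func isReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, _ := isReady(body)
	fmt.Println("ha-431000-m02 Ready:", ok)
}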
	I0819 10:48:01.295639    6731 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0819 10:48:01.295675    6731 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 10:48:01.295684    6731 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 10:48:01.295719    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:01.295725    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.295731    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.295738    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.330440    6731 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0819 10:48:01.337354    6731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.337421    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hr2qx
	I0819 10:48:01.337427    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.337433    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.337437    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.341316    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.341771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.341778    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.341784    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.341787    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.348506    6731 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 10:48:01.348939    6731 pod_ready.go:93] pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.348948    6731 pod_ready.go:82] duration metric: took 11.576417ms for pod "coredns-6f6b679f8f-hr2qx" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.348955    6731 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.349002    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vc76p
	I0819 10:48:01.349009    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.349018    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.349023    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.352838    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.353315    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.353323    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.353329    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.353332    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.359196    6731 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 10:48:01.359534    6731 pod_ready.go:93] pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.359544    6731 pod_ready.go:82] duration metric: took 10.583164ms for pod "coredns-6f6b679f8f-vc76p" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.359550    6731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.359593    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000
	I0819 10:48:01.359598    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.359606    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.359612    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.362788    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.363225    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.363232    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.363240    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.363244    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.367689    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:01.368075    6731 pod_ready.go:93] pod "etcd-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.368086    6731 pod_ready.go:82] duration metric: took 8.530882ms for pod "etcd-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.368092    6731 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.368143    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-431000-m02
	I0819 10:48:01.368148    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.368154    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.368159    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.371432    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.372034    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:01.372042    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.372047    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.372051    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.374444    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.374736    6731 pod_ready.go:93] pod "etcd-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.374746    6731 pod_ready.go:82] duration metric: took 6.6473ms for pod "etcd-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.374762    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.374802    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000
	I0819 10:48:01.374806    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.374812    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.374816    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.377666    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.497544    6731 request.go:632] Waited for 119.461544ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.497628    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:01.497639    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.497644    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.497657    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.500903    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:01.501455    6731 pod_ready.go:93] pod "kube-apiserver-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.501465    6731 pod_ready.go:82] duration metric: took 126.694729ms for pod "kube-apiserver-ha-431000" in "kube-system" namespace to be "Ready" ...
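The "Waited ... due to client-side throttling" lines come from client-go's own rate limiter, a QPS/burst token bucket applied before requests leave the client; as the message notes, they are unrelated to server-side priority and fairness. A sketch of a limiter of the same shape using golang.org/x/time/rate (QPS 5 with burst 10 are assumed here, roughly client-go's historical defaults):

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token-bucket limiter in the style of client-go's default client-side
	// throttle: ~5 requests/second with a burst of 10 (assumed defaults).
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	ctx := context.Background()
	for i := 0; i < 12; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil {
			fmt.Println("wait error:", err)
			return
		}
		if waited := time.Since(start); waited > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited)
		}
	}
}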
	I0819 10:48:01.501472    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.696523    6731 request.go:632] Waited for 195.000548ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:48:01.696576    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-431000-m02
	I0819 10:48:01.696581    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.696587    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.696591    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.699558    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.896265    6731 request.go:632] Waited for 196.197674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:01.896299    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:01.896306    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:01.896314    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:01.896318    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:01.898585    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:01.899021    6731 pod_ready.go:93] pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:01.899030    6731 pod_ready.go:82] duration metric: took 397.544864ms for pod "kube-apiserver-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:01.899037    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.096355    6731 request.go:632] Waited for 197.256376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:48:02.096461    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000
	I0819 10:48:02.096473    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.096484    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.096492    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.100048    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:02.295872    6731 request.go:632] Waited for 195.092018ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:02.295923    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:02.295929    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.295935    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.295938    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.297901    6731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 10:48:02.298170    6731 pod_ready.go:93] pod "kube-controller-manager-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:02.298180    6731 pod_ready.go:82] duration metric: took 399.12914ms for pod "kube-controller-manager-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.298196    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.496479    6731 request.go:632] Waited for 198.200207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:48:02.496532    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-431000-m02
	I0819 10:48:02.496579    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.496595    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.496601    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.500536    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:02.695959    6731 request.go:632] Waited for 194.694484ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:02.696038    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:02.696044    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.696053    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.696059    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.698693    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:02.699259    6731 pod_ready.go:93] pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:02.699268    6731 pod_ready.go:82] duration metric: took 401.059351ms for pod "kube-controller-manager-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.699282    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2fn5w" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:02.895886    6731 request.go:632] Waited for 196.554773ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2fn5w
	I0819 10:48:02.895937    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2fn5w
	I0819 10:48:02.895943    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:02.895949    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:02.895952    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:02.898485    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:03.097015    6731 request.go:632] Waited for 197.927938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m04
	I0819 10:48:03.097110    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m04
	I0819 10:48:03.097121    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.097133    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.097139    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.100422    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:03.100848    6731 pod_ready.go:93] pod "kube-proxy-2fn5w" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:03.100861    6731 pod_ready.go:82] duration metric: took 401.564872ms for pod "kube-proxy-2fn5w" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.100870    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.297507    6731 request.go:632] Waited for 196.572896ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:48:03.297595    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5h7j2
	I0819 10:48:03.297605    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.297617    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.297628    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.300868    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:03.497170    6731 request.go:632] Waited for 195.491118ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:03.497222    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:03.497231    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.497243    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.497254    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.500591    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:03.501004    6731 pod_ready.go:98] node "ha-431000-m02" hosting pod "kube-proxy-5h7j2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:03.501017    6731 pod_ready.go:82] duration metric: took 400.132303ms for pod "kube-proxy-5h7j2" in "kube-system" namespace to be "Ready" ...
	E0819 10:48:03.501025    6731 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-431000-m02" hosting pod "kube-proxy-5h7j2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:03.501032    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.696124    6731 request.go:632] Waited for 195.010851ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:48:03.696172    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5l56s
	I0819 10:48:03.696179    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.696218    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.696226    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.699032    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:03.895964    6731 request.go:632] Waited for 196.576431ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:03.896021    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:03.896029    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:03.896037    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:03.896043    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:03.898534    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:03.898926    6731 pod_ready.go:93] pod "kube-proxy-5l56s" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:03.898935    6731 pod_ready.go:82] duration metric: took 397.887553ms for pod "kube-proxy-5l56s" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:03.898942    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:04.096184    6731 request.go:632] Waited for 197.190491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:48:04.096246    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000
	I0819 10:48:04.096256    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.096269    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.096277    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.099213    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:04.297318    6731 request.go:632] Waited for 197.526248ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:04.297394    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000
	I0819 10:48:04.297404    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.297415    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.297424    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.301350    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:04.301819    6731 pod_ready.go:93] pod "kube-scheduler-ha-431000" in "kube-system" namespace has status "Ready":"True"
	I0819 10:48:04.301828    6731 pod_ready.go:82] duration metric: took 402.870121ms for pod "kube-scheduler-ha-431000" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:04.301835    6731 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	I0819 10:48:04.495992    6731 request.go:632] Waited for 194.108051ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:48:04.496068    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-431000-m02
	I0819 10:48:04.496077    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.496087    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.496094    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.499407    6731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 10:48:04.696474    6731 request.go:632] Waited for 196.428196ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:04.696569    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m02
	I0819 10:48:04.696581    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.696595    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.696602    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.699405    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:04.699912    6731 pod_ready.go:98] node "ha-431000-m02" hosting pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:04.699926    6731 pod_ready.go:82] duration metric: took 398.076795ms for pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace to be "Ready" ...
	E0819 10:48:04.699934    6731 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-431000-m02" hosting pod "kube-scheduler-ha-431000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-431000-m02" has status "Ready":"False"
	I0819 10:48:04.699945    6731 pod_ready.go:39] duration metric: took 3.404223088s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 10:48:04.699963    6731 api_server.go:52] waiting for apiserver process to appear ...
	I0819 10:48:04.700028    6731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 10:48:04.711937    6731 api_server.go:72] duration metric: took 12.104535169s to wait for apiserver process to appear ...
	I0819 10:48:04.711948    6731 api_server.go:88] waiting for apiserver healthz status ...
	I0819 10:48:04.711964    6731 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0819 10:48:04.714976    6731 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
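The healthz probe above is a plain GET to /healthz that expects the body "ok". A sketch of the same probe using the client certificate, key, and CA paths visible in the kapi client config earlier in this log (the TLS setup is the standard crypto/tls pattern, not minikube's exact code):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Client cert/key and CA paths as seen in the kapi client config above.
	cert, err := tls.LoadX509KeyPair(
		"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt",
		"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	caPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	}}}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect 200 ok
}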
	I0819 10:48:04.715016    6731 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0819 10:48:04.715022    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.715028    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.715032    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.715515    6731 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 10:48:04.715659    6731 api_server.go:141] control plane version: v1.31.0
	I0819 10:48:04.715671    6731 api_server.go:131] duration metric: took 3.718718ms to wait for apiserver health ...
	I0819 10:48:04.715676    6731 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 10:48:04.896062    6731 request.go:632] Waited for 180.330037ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:04.896138    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:04.896149    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:04.896159    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:04.896167    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:04.900885    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:04.904876    6731 system_pods.go:59] 19 kube-system pods found
	I0819 10:48:04.904891    6731 system_pods.go:61] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:48:04.904896    6731 system_pods.go:61] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:48:04.904899    6731 system_pods.go:61] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:48:04.904902    6731 system_pods.go:61] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:48:04.904905    6731 system_pods.go:61] "kindnet-kcrzx" [4d8e74ea-456c-476b-951f-c880eb642788] Running
	I0819 10:48:04.904908    6731 system_pods.go:61] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:48:04.904911    6731 system_pods.go:61] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:48:04.904914    6731 system_pods.go:61] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:48:04.904916    6731 system_pods.go:61] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:48:04.904919    6731 system_pods.go:61] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:48:04.904922    6731 system_pods.go:61] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:48:04.904925    6731 system_pods.go:61] "kube-proxy-2fn5w" [bca1b722-fe85-4f4b-a536-8228357812a4] Running
	I0819 10:48:04.904927    6731 system_pods.go:61] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:48:04.904930    6731 system_pods.go:61] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:48:04.904933    6731 system_pods.go:61] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:48:04.904935    6731 system_pods.go:61] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:48:04.904938    6731 system_pods.go:61] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:48:04.904940    6731 system_pods.go:61] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:48:04.904955    6731 system_pods.go:61] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:48:04.904964    6731 system_pods.go:74] duration metric: took 189.278663ms to wait for pod list to return data ...
	I0819 10:48:04.904971    6731 default_sa.go:34] waiting for default service account to be created ...
	I0819 10:48:05.096767    6731 request.go:632] Waited for 191.735215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:48:05.096807    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0819 10:48:05.096813    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:05.096824    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:05.096848    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:05.099644    6731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 10:48:05.099783    6731 default_sa.go:45] found service account: "default"
	I0819 10:48:05.099793    6731 default_sa.go:55] duration metric: took 194.813501ms for default service account to be created ...
	I0819 10:48:05.099798    6731 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 10:48:05.296235    6731 request.go:632] Waited for 196.389305ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:05.296338    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0819 10:48:05.296351    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:05.296362    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:05.296370    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:05.300491    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:05.304610    6731 system_pods.go:86] 19 kube-system pods found
	I0819 10:48:05.304622    6731 system_pods.go:89] "coredns-6f6b679f8f-hr2qx" [625d8978-9556-45d9-a09a-f94be2492a2b] Running
	I0819 10:48:05.304626    6731 system_pods.go:89] "coredns-6f6b679f8f-vc76p" [dcdfebee-b458-4811-acd1-03eed5ffb5a7] Running
	I0819 10:48:05.304629    6731 system_pods.go:89] "etcd-ha-431000" [e98fabd3-a6c2-4483-9de6-ea242c6c7af6] Running
	I0819 10:48:05.304631    6731 system_pods.go:89] "etcd-ha-431000-m02" [1747c93b-a041-4419-b664-45170979e6c3] Running
	I0819 10:48:05.304634    6731 system_pods.go:89] "kindnet-kcrzx" [4d8e74ea-456c-476b-951f-c880eb642788] Running
	I0819 10:48:05.304636    6731 system_pods.go:89] "kindnet-lvdbg" [d8f9a076-8fd4-4f1c-88ed-2472a0ae22b2] Running
	I0819 10:48:05.304639    6731 system_pods.go:89] "kindnet-qmgqd" [f0609613-9015-439f-a60f-a92adc0b073b] Running
	I0819 10:48:05.304641    6731 system_pods.go:89] "kube-apiserver-ha-431000" [ae3ea813-f65f-4628-b835-46f36ece40cb] Running
	I0819 10:48:05.304644    6731 system_pods.go:89] "kube-apiserver-ha-431000-m02" [a0c86020-8c65-44ba-ae68-6c270d61c16c] Running
	I0819 10:48:05.304646    6731 system_pods.go:89] "kube-controller-manager-ha-431000" [a0421f18-d701-4745-8db1-42dc9f5f41b9] Running
	I0819 10:48:05.304652    6731 system_pods.go:89] "kube-controller-manager-ha-431000-m02" [43a2ecfb-e22f-44bc-a2b8-2f318d04ad62] Running
	I0819 10:48:05.304655    6731 system_pods.go:89] "kube-proxy-2fn5w" [bca1b722-fe85-4f4b-a536-8228357812a4] Running
	I0819 10:48:05.304658    6731 system_pods.go:89] "kube-proxy-5h7j2" [6b44fae4-8003-4934-b770-f0c3474f2369] Running
	I0819 10:48:05.304660    6731 system_pods.go:89] "kube-proxy-5l56s" [6f1461cf-fbf8-4958-bb9f-f4b6c8c666f4] Running
	I0819 10:48:05.304663    6731 system_pods.go:89] "kube-scheduler-ha-431000" [d0e14d90-c91b-4206-9b95-21831eaa2d5f] Running
	I0819 10:48:05.304666    6731 system_pods.go:89] "kube-scheduler-ha-431000-m02" [c3e4c63d-8611-406f-aa0b-7efe2940e1f6] Running
	I0819 10:48:05.304670    6731 system_pods.go:89] "kube-vip-ha-431000" [e9f1fcdc-34a1-45c8-87eb-dcb5028483b1] Running
	I0819 10:48:05.304673    6731 system_pods.go:89] "kube-vip-ha-431000-m02" [416d4542-188e-44bf-a272-f2bce97de1a2] Running
	I0819 10:48:05.304675    6731 system_pods.go:89] "storage-provisioner" [e68070ef-bdea-45e6-b7a8-8834534fa616] Running
	I0819 10:48:05.304679    6731 system_pods.go:126] duration metric: took 204.873114ms to wait for k8s-apps to be running ...
	I0819 10:48:05.304689    6731 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 10:48:05.304743    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:48:05.315748    6731 system_svc.go:56] duration metric: took 11.056169ms WaitForService to wait for kubelet
	I0819 10:48:05.315761    6731 kubeadm.go:582] duration metric: took 12.708349079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
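
The kubelet probe above hinges on `systemctl is-active --quiet` exiting 0 only when the unit is active, so the exit status is the boolean. A sketch of that idea; the helper name is invented, and the logged sudo wrapper and literal "service" token are dropped for brevity:

	// svcactive_sketch.go -- exit-status-as-boolean service check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func serviceActive(name string) bool {
		// A nil error means exit status 0, i.e. the unit is active.
		return exec.Command("systemctl", "is-active", "--quiet", name).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", serviceActive("kubelet"))
	}
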
	I0819 10:48:05.315777    6731 node_conditions.go:102] verifying NodePressure condition ...
	I0819 10:48:05.496283    6731 request.go:632] Waited for 180.435074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0819 10:48:05.496409    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0819 10:48:05.496422    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:05.496434    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:05.496442    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:05.500479    6731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 10:48:05.501183    6731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:48:05.501199    6731 node_conditions.go:123] node cpu capacity is 2
	I0819 10:48:05.501209    6731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:48:05.501213    6731 node_conditions.go:123] node cpu capacity is 2
	I0819 10:48:05.501217    6731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 10:48:05.501220    6731 node_conditions.go:123] node cpu capacity is 2
	I0819 10:48:05.501224    6731 node_conditions.go:105] duration metric: took 185.438997ms to run NodePressure ...
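
The per-node capacity lines above can be reproduced with client-go. A sketch assuming k8s.io/client-go is on the module path and a kubeconfig sits at ~/.kube/config (both assumptions; minikube reads its own profile config instead):

	// nodecaps_sketch.go -- listing node ephemeral-storage and cpu capacity.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Mirrors the "storage ephemeral capacity" / "cpu capacity" pairs.
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().String())
		}
	}
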
	I0819 10:48:05.501232    6731 start.go:241] waiting for startup goroutines ...
	I0819 10:48:05.501250    6731 start.go:255] writing updated cluster config ...
	I0819 10:48:05.523466    6731 out.go:201] 
	I0819 10:48:05.560623    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:05.560698    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:48:05.598433    6731 out.go:177] * Starting "ha-431000-m03" control-plane node in "ha-431000" cluster
	I0819 10:48:05.673302    6731 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:48:05.673330    6731 cache.go:56] Caching tarball of preloaded images
	I0819 10:48:05.673481    6731 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:48:05.673495    6731 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:48:05.673583    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:48:05.674126    6731 start.go:360] acquireMachinesLock for ha-431000-m03: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:48:05.674196    6731 start.go:364] duration metric: took 53.173µs to acquireMachinesLock for "ha-431000-m03"
	I0819 10:48:05.674214    6731 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:48:05.674220    6731 fix.go:54] fixHost starting: m03
	I0819 10:48:05.674532    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:48:05.674564    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:48:05.684031    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52071
	I0819 10:48:05.684387    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:48:05.684730    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:48:05.684748    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:48:05.684970    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:48:05.685096    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:05.685184    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:48:05.685314    6731 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:05.685417    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 4921
	I0819 10:48:05.686356    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid 4921 missing from process table
	I0819 10:48:05.686393    6731 fix.go:112] recreateIfNeeded on ha-431000-m03: state=Stopped err=<nil>
	I0819 10:48:05.686403    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	W0819 10:48:05.686488    6731 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:48:05.707556    6731 out.go:177] * Restarting existing hyperkit VM for "ha-431000-m03" ...
	I0819 10:48:05.749205    6731 main.go:141] libmachine: (ha-431000-m03) Calling .Start
	I0819 10:48:05.749457    6731 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:05.749508    6731 main.go:141] libmachine: (ha-431000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid
	I0819 10:48:05.750891    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid 4921 missing from process table
	I0819 10:48:05.750907    6731 main.go:141] libmachine: (ha-431000-m03) DBG | pid 4921 is in state "Stopped"
	I0819 10:48:05.750937    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid...
	I0819 10:48:05.751980    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Using UUID e29829ac-8e18-4202-b85c-7ebcba6c4b47
	I0819 10:48:05.783917    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Generated MAC f6:29:ff:43:e4:63
	I0819 10:48:05.783944    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:48:05.784089    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00039adb0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:48:05.784126    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e29829ac-8e18-4202-b85c-7ebcba6c4b47", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00039adb0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:48:05.784162    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e29829ac-8e18-4202-b85c-7ebcba6c4b47", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:48:05.784200    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e29829ac-8e18-4202-b85c-7ebcba6c4b47 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/ha-431000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"

	I0819 10:48:05.784218    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:48:05.786149    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 DEBUG: hyperkit: Pid is 6801
	I0819 10:48:05.786682    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Attempt 0
	I0819 10:48:05.786725    6731 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:05.786782    6731 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 6801
	I0819 10:48:05.789082    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Searching for f6:29:ff:43:e4:63 in /var/db/dhcpd_leases ...
	I0819 10:48:05.789187    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:48:05.789247    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d6bf}
	I0819 10:48:05.789282    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d6ab}
	I0819 10:48:05.789327    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetConfigRaw
	I0819 10:48:05.789331    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ea:1c:f6:2b:4f:18 ID:1,ea:1c:f6:2b:4f:18 Lease:0x66c4d578}
	I0819 10:48:05.789394    6731 main.go:141] libmachine: (ha-431000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c4d268}
	I0819 10:48:05.789432    6731 main.go:141] libmachine: (ha-431000-m03) DBG | Found match: f6:29:ff:43:e4:63
	I0819 10:48:05.789457    6731 main.go:141] libmachine: (ha-431000-m03) DBG | IP: 192.169.0.7
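
On macOS the hyperkit driver resolves a VM's IP by matching its generated MAC against /var/db/dhcpd_leases, as the DBG lines above show. A sketch of that scan; the lease-entry field names are inferred from the bootpd lease format, so treat them as approximate:

	// leasescan_sketch.go -- find a VM's IP by MAC in /var/db/dhcpd_leases.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const mac = "f6:29:ff:43:e4:63" // MAC generated for ha-431000-m03
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			// Entries carry lines like "ip_address=192.169.0.7" followed by
			// "hw_address=1,f6:29:ff:43:e4:63".
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
				fmt.Println("found match:", mac, "->", ip)
				return
			}
		}
		fmt.Println("no lease found for", mac)
	}
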
	I0819 10:48:05.790573    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:05.790831    6731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:48:05.791509    6731 machine.go:93] provisionDockerMachine start ...
	I0819 10:48:05.791526    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:05.791708    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:05.791856    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:05.791989    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:05.792106    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:05.792233    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:05.792391    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:05.792718    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:05.792736    6731 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:48:05.795522    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:48:05.805645    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:48:05.807213    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:48:05.807239    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:48:05.807263    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:48:05.807280    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:48:06.196775    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:48:06.196792    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:48:06.311674    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:48:06.311699    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:48:06.311708    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:48:06.311716    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:48:06.312485    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:48:06.312497    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:48:11.891105    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:48:11.891118    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:48:11.891126    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:48:11.914412    6731 main.go:141] libmachine: (ha-431000-m03) DBG | 2024/08/19 10:48:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:48:40.850746    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:48:40.850774    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:48:40.850923    6731 buildroot.go:166] provisioning hostname "ha-431000-m03"
	I0819 10:48:40.850935    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:48:40.851109    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:40.851215    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:40.851319    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.851447    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.851565    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:40.851724    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:40.851884    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:40.851892    6731 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m03 && echo "ha-431000-m03" | sudo tee /etc/hostname
	I0819 10:48:40.912350    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m03
	
	I0819 10:48:40.912364    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:40.912505    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:40.912602    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.912691    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:40.912785    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:40.912908    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:40.913053    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:40.913064    6731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:48:40.968529    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:48:40.968544    6731 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:48:40.968564    6731 buildroot.go:174] setting up certificates
	I0819 10:48:40.968572    6731 provision.go:84] configureAuth start
	I0819 10:48:40.968583    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetMachineName
	I0819 10:48:40.968727    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:40.968824    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:40.968927    6731 provision.go:143] copyHostCerts
	I0819 10:48:40.968955    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:48:40.969005    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:48:40.969014    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:48:40.969148    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:48:40.969352    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:48:40.969382    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:48:40.969386    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:48:40.969454    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:48:40.969597    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:48:40.969626    6731 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:48:40.969631    6731 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:48:40.969728    6731 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:48:40.969875    6731 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m03 san=[127.0.0.1 192.169.0.7 ha-431000-m03 localhost minikube]
	I0819 10:48:41.057829    6731 provision.go:177] copyRemoteCerts
	I0819 10:48:41.057874    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:48:41.057888    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.058026    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.058130    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.058224    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.058305    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:41.091148    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:48:41.091220    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:48:41.111177    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:48:41.111249    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:48:41.131169    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:48:41.131232    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 10:48:41.150507    6731 provision.go:87] duration metric: took 181.923979ms to configureAuth
	I0819 10:48:41.150522    6731 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:48:41.150698    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:41.150712    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:41.150863    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.150946    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.151038    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.151126    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.151222    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.151342    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:41.151471    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:41.151478    6731 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:48:41.202400    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:48:41.202413    6731 buildroot.go:70] root file system type: tmpfs
	I0819 10:48:41.202505    6731 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:48:41.202518    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.202705    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.202819    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.202905    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.202997    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.203153    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:41.203294    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:41.203341    6731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:48:41.264039    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:48:41.264057    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:41.264193    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:41.264267    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.264354    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:41.264447    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:41.264565    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:41.264712    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:41.264724    6731 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:48:42.813749    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:48:42.813763    6731 machine.go:96] duration metric: took 37.021449642s to provisionDockerMachine
	I0819 10:48:42.813771    6731 start.go:293] postStartSetup for "ha-431000-m03" (driver="hyperkit")
	I0819 10:48:42.813778    6731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:48:42.813796    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:42.813978    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:48:42.813990    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:42.814079    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:42.814168    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.814251    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:42.814339    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:42.847285    6731 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:48:42.850702    6731 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:48:42.850716    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:48:42.850802    6731 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:48:42.850961    6731 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:48:42.850968    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:48:42.851143    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:48:42.859533    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:48:42.879757    6731 start.go:296] duration metric: took 65.975651ms for postStartSetup
	I0819 10:48:42.879780    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:42.879958    6731 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:48:42.879970    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:42.880059    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:42.880147    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.880225    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:42.880299    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:42.912892    6731 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:48:42.912952    6731 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:48:42.966028    6731 fix.go:56] duration metric: took 37.291003007s for fixHost
	I0819 10:48:42.966067    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:42.966300    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:42.966470    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.966677    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:42.966842    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:42.967014    6731 main.go:141] libmachine: Using SSH client type: native
	I0819 10:48:42.967198    6731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfce5ea0] 0xfce8c00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0819 10:48:42.967209    6731 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:48:43.017214    6731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089722.809914885
	
	I0819 10:48:43.017227    6731 fix.go:216] guest clock: 1724089722.809914885
	I0819 10:48:43.017238    6731 fix.go:229] Guest: 2024-08-19 10:48:42.809914885 -0700 PDT Remote: 2024-08-19 10:48:42.966051 -0700 PDT m=+90.012694037 (delta=-156.136115ms)
	I0819 10:48:43.017249    6731 fix.go:200] guest clock delta is within tolerance: -156.136115ms
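
The guest-clock check compares the `date +%s.%N` output against the host clock and accepts any delta within tolerance. Recomputing the logged -156.136115ms figure from the values above (the parsing helper is illustrative, not minikube's code):

	// clockdelta_sketch.go -- recompute the guest clock delta from the log.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestTime turns "seconds.nanoseconds" from `date +%s.%N` into a time.Time.
	func guestTime(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		secs, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nanos int64
		if len(parts) == 2 {
			// Right-pad the fraction to 9 digits so it reads as nanoseconds.
			frac := (parts[1] + "000000000")[:9]
			if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(secs, nanos), nil
	}

	func main() {
		// Both values are taken verbatim from the log lines above.
		guest, err := guestTime("1724089722.809914885")
		if err != nil {
			panic(err)
		}
		pdt := time.FixedZone("PDT", -7*60*60)
		host := time.Date(2024, 8, 19, 10, 48, 42, 966051000, pdt)
		fmt.Printf("guest clock delta: %v\n", guest.Sub(host)) // -156.136115ms
	}
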
	I0819 10:48:43.017253    6731 start.go:83] releasing machines lock for "ha-431000-m03", held for 37.342247723s
	I0819 10:48:43.017267    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.017412    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:43.053981    6731 out.go:177] * Found network options:
	I0819 10:48:43.129066    6731 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0819 10:48:43.183072    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:48:43.183105    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:48:43.183124    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.183855    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.184015    6731 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0819 10:48:43.184100    6731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:48:43.184137    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	W0819 10:48:43.184239    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 10:48:43.184256    6731 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:48:43.184293    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:43.184321    6731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:48:43.184333    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0819 10:48:43.184497    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:43.184513    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0819 10:48:43.184663    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:43.184689    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0819 10:48:43.184810    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0819 10:48:43.184822    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0819 10:48:43.184959    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0819 10:48:43.213583    6731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:48:43.213642    6731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:48:43.260969    6731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:48:43.260991    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:48:43.261093    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:48:43.276683    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:48:43.284995    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:48:43.293374    6731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:48:43.293418    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:48:43.301652    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:48:43.309897    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:48:43.318705    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:48:43.326972    6731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:48:43.335390    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:48:43.343887    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:48:43.352357    6731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:48:43.360984    6731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:48:43.368494    6731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:48:43.376120    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:43.467265    6731 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 10:48:43.484775    6731 start.go:495] detecting cgroup driver to use...
	I0819 10:48:43.484846    6731 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:48:43.497091    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:48:43.508193    6731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:48:43.523755    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:48:43.534687    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:48:43.544926    6731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:48:43.565401    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:48:43.578088    6731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:48:43.593104    6731 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:48:43.595950    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:48:43.603348    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:48:43.617225    6731 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:48:43.708564    6731 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:48:43.826974    6731 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:48:43.827000    6731 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:48:43.840921    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:43.931831    6731 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:48:46.156257    6731 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.224358944s)
	I0819 10:48:46.156321    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:48:46.167537    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:48:46.177508    6731 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:48:46.275371    6731 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:48:46.384348    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:46.481007    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:48:46.494577    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:48:46.505747    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:46.597531    6731 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:48:46.653351    6731 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:48:46.653427    6731 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:48:46.657670    6731 start.go:563] Will wait 60s for crictl version
	I0819 10:48:46.657717    6731 ssh_runner.go:195] Run: which crictl
	I0819 10:48:46.660938    6731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:48:46.686761    6731 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:48:46.686832    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:48:46.704526    6731 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:48:46.743134    6731 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:48:46.784818    6731 out.go:177]   - env NO_PROXY=192.169.0.5
	I0819 10:48:46.805951    6731 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0819 10:48:46.827168    6731 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0819 10:48:46.827576    6731 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:48:46.832299    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:48:46.842314    6731 mustload.go:65] Loading cluster: ha-431000
	I0819 10:48:46.842487    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:46.842703    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:48:46.842725    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:48:46.851523    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52093
	I0819 10:48:46.851853    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:48:46.852189    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:48:46.852199    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:48:46.852392    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:48:46.852498    6731 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:48:46.852572    6731 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:48:46.852653    6731 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:48:46.853627    6731 host.go:66] Checking if "ha-431000" exists ...
	I0819 10:48:46.853864    6731 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:48:46.853886    6731 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:48:46.862538    6731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52095
	I0819 10:48:46.862891    6731 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:48:46.863218    6731 main.go:141] libmachine: Using API Version  1
	I0819 10:48:46.863228    6731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:48:46.863493    6731 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:48:46.863609    6731 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:48:46.863718    6731 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.7
	I0819 10:48:46.863725    6731 certs.go:194] generating shared ca certs ...
	I0819 10:48:46.863739    6731 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:46.863891    6731 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:48:46.863952    6731 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:48:46.863961    6731 certs.go:256] generating profile certs ...
	I0819 10:48:46.864059    6731 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:48:46.864084    6731 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc
	I0819 10:48:46.864099    6731 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0819 10:48:47.115702    6731 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc ...
	I0819 10:48:47.115728    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc: {Name:mk546bf47d8f9536a5f5b6d4554be985cbd51530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:47.116053    6731 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc ...
	I0819 10:48:47.116065    6731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc: {Name:mk7e6a2c85fe835844cf7f3435ab2787264953bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:48:47.116272    6731 certs.go:381] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt.bd7e22bc -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt
	I0819 10:48:47.116477    6731 certs.go:385] copying /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.bd7e22bc -> /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key
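	The apiserver serving cert generated above carries every address a client might dial as an IP SAN: the in-cluster service IPs (10.96.0.1, 10.0.0.1), loopback, all three control-plane node IPs, and the kube-vip VIP 192.169.0.254. A minimal crypto/x509 sketch of such a certificate; it self-signs for brevity where minikube signs with its shared CA, and the key size and validity window are assumptions:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// SANs copied from the log line above.
    	var ips []net.IP
    	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
    		"192.169.0.5", "192.169.0.6", "192.169.0.7", "192.169.0.254"} {
    		ips = append(ips, net.ParseIP(s))
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses:  ips,
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // validity is an assumption
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed here; the real cert is signed by the shared minikubeCA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Println(len(der), err)
    }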
	I0819 10:48:47.116689    6731 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:48:47.116699    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:48:47.116720    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:48:47.116739    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:48:47.116757    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:48:47.116776    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:48:47.116795    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:48:47.116812    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:48:47.116829    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:48:47.116905    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:48:47.116938    6731 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:48:47.116947    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:48:47.116979    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:48:47.117007    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:48:47.117035    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:48:47.117102    6731 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:48:47.117135    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.117157    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.117176    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.117208    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:48:47.117346    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:48:47.117436    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:48:47.117536    6731 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:48:47.117615    6731 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:48:47.142966    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 10:48:47.147073    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 10:48:47.155318    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 10:48:47.158461    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0819 10:48:47.166659    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 10:48:47.169909    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 10:48:47.178109    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 10:48:47.181265    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 10:48:47.189483    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 10:48:47.192613    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 10:48:47.201555    6731 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 10:48:47.205119    6731 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 10:48:47.213152    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:48:47.233357    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:48:47.253373    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:48:47.273621    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:48:47.293620    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 10:48:47.313508    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:48:47.333626    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:48:47.353462    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:48:47.373370    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:48:47.393215    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:48:47.412732    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:48:47.432601    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 10:48:47.446319    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0819 10:48:47.460225    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 10:48:47.473780    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 10:48:47.487357    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 10:48:47.501097    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 10:48:47.514700    6731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 10:48:47.528522    6731 ssh_runner.go:195] Run: openssl version
	I0819 10:48:47.532949    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:48:47.541688    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.545076    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.545117    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:48:47.549433    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 10:48:47.558033    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:48:47.566686    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.570522    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.570574    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:48:47.574909    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0819 10:48:47.583535    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:48:47.592184    6731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.595867    6731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.595904    6731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:48:47.600346    6731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
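	The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes with a ".0" suffix; OpenSSL resolves CAs in /etc/ssl/certs by exactly that name, so one hash-named symlink per PEM is the whole trust-store install. A sketch shelling out to openssl the same way the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA links /etc/ssl/certs/<subject-hash>.0 -> pem, reproducing
    // the "openssl x509 -hash -noout" + "ln -fs" pair from the log.
    func installCA(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // ln -fs semantics: replace a stale link if present
    	return os.Symlink(pem, link)
    }

    func main() {
    	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }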
	I0819 10:48:47.609333    6731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:48:47.612588    6731 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 10:48:47.612626    6731 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0819 10:48:47.612672    6731 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
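	In the kubelet unit above, the empty ExecStart= line is deliberate systemd drop-in syntax: an empty assignment clears any inherited command before the next line sets the real one. Only --hostname-override and --node-ip vary per node; a sketch assembling that override for this node:

    package main

    import "fmt"

    const kubeletBin = "/var/lib/minikube/binaries/v1.31.0/kubelet"

    // dropIn renders the per-node [Service] override; the empty ExecStart=
    // resets the unit's command before the override takes effect.
    func dropIn(node, ip string) string {
    	flags := fmt.Sprintf("--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
    		"--config=/var/lib/kubelet/config.yaml --hostname-override=%s "+
    		"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s", node, ip)
    	return "[Service]\nExecStart=\nExecStart=" + kubeletBin + " " + flags + "\n"
    }

    func main() {
    	fmt.Print(dropIn("ha-431000-m03", "192.169.0.7"))
    }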
	I0819 10:48:47.612693    6731 kube-vip.go:115] generating kube-vip config ...
	I0819 10:48:47.612723    6731 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:48:47.627870    6731 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
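	The modprobe above is a capability probe: load-balancing in kube-vip rides on IPVS, so lb_enable is only flipped on (as seen in the manifest below) when the IPVS kernel modules load cleanly. The gist of the check:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the probe in the log: if the IPVS modules load, enable lb.
    	mods := []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}
    	args := append([]string{"modprobe", "--all"}, mods...)
    	err := exec.Command("sudo", args...).Run()
    	fmt.Println("enable kube-vip load-balancing:", err == nil)
    }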
	I0819 10:48:47.627924    6731 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
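	The manifest above runs kube-vip as a static pod on each control plane: the instances elect a leader through the plndr-cp-lock Lease, and the leader ARP-advertises the VIP 192.169.0.254 on eth0 and load-balances apiserver traffic on port 8443. The election timings must nest (lease duration > renew deadline > retry period), which the generated values do:

    package main

    import "fmt"

    func main() {
    	// Timings from the manifest above: vip_leaseduration,
    	// vip_renewdeadline, vip_retryperiod (seconds).
    	lease, renew, retry := 5, 3, 1
    	fmt.Println("timings nest correctly:", lease > renew && renew > retry)
    }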
	I0819 10:48:47.627976    6731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:48:47.636973    6731 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 10:48:47.637024    6731 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 10:48:47.646020    6731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 10:48:47.646020    6731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 10:48:47.646020    6731 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 10:48:47.646038    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:48:47.646059    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:48:47.646062    6731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 10:48:47.646121    6731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 10:48:47.646172    6731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 10:48:47.660116    6731 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:48:47.660157    6731 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 10:48:47.660183    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 10:48:47.660208    6731 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 10:48:47.660226    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 10:48:47.660248    6731 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 10:48:47.673769    6731 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 10:48:47.673805    6731 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
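	The three "Not caching binary" lines fetch kubelet, kubectl, and kubeadm straight from dl.k8s.io, with the ?checksum=file:...sha256 suffix telling the downloader to verify each payload against its published SHA-256 file. A self-contained sketch of that download-and-verify step (the target path /tmp/kubelet is illustrative):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch streams url into path while hashing it, returning the hex SHA-256.
    func fetch(url, path string) (string, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	out, err := os.Create(path)
    	if err != nil {
    		return "", err
    	}
    	defer out.Close()
    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return "", err
    	}
    	return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
    	got, err := fetch(base, "/tmp/kubelet") // illustrative local path
    	if err != nil {
    		panic(err)
    	}
    	resp, err := http.Get(base + ".sha256") // the published digest file
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	want, _ := io.ReadAll(resp.Body)
    	fmt.Println("checksum ok:", got == strings.TrimSpace(string(want)))
    }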
	I0819 10:48:48.141691    6731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 10:48:48.149459    6731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0819 10:48:48.162963    6731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:48:48.176379    6731 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:48:48.189896    6731 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:48:48.192847    6731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:48:48.202768    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:48.297576    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:48:48.315324    6731 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:48:48.315508    6731 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:48:48.336018    6731 out.go:177] * Verifying Kubernetes components...
	I0819 10:48:48.356514    6731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:48:48.452232    6731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:48:49.049566    6731 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:48:49.049773    6731 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1139f2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 10:48:49.049811    6731 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0819 10:48:49.049986    6731 node_ready.go:35] waiting up to 6m0s for node "ha-431000-m03" to be "Ready" ...
	I0819 10:48:49.050026    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:49.050031    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:49.050044    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:49.050049    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:49.052182    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
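	Everything from here to the end of this excerpt is one wait loop: GET /api/v1/nodes/ha-431000-m03 returns 404 because the joining control plane has not yet registered its Node object, and node_ready keeps retrying on a roughly 500ms cadence within its 6m budget. A stripped-down version of the poll; client-cert TLS from the kubeconfig is elided, so http.DefaultClient here is a placeholder:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitNodeRegistered polls until GET /api/v1/nodes/<name> stops
    // returning 404, mirroring the node_ready loop in the log.
    func waitNodeRegistered(c *http.Client, base, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := c.Get(base + "/api/v1/nodes/" + name)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // node exists; its Ready condition is checked next
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence above
    	}
    	return fmt.Errorf("node %q never registered", name)
    }

    func main() {
    	err := waitNodeRegistered(http.DefaultClient, "https://192.169.0.5:8443",
    		"ha-431000-m03", 6*time.Minute)
    	fmt.Println(err)
    }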
	I0819 10:48:49.550380    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:49.550401    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:49.550412    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:49.550420    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:49.553469    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:50.050836    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:50.050856    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:50.050867    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:50.050872    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:50.053828    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:50.551275    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:50.551290    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:50.551297    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:50.551299    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:50.553247    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:48:51.051126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:51.051149    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:51.051161    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:51.051169    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:51.054487    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:51.054565    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:51.550751    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:51.550764    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:51.550770    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:51.550773    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:51.554094    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:52.051808    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:52.051848    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:52.051857    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:52.051864    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:52.054405    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:52.551111    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:52.551135    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:52.551147    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:52.551153    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:52.554177    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:53.050562    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:53.050577    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:53.050584    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:53.050587    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:53.052361    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:48:53.550771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:53.550787    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:53.550794    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:53.550798    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:53.553283    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:53.553380    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:54.051356    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:54.051428    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:54.051441    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:54.051447    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:54.054348    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:54.551004    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:54.551020    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:54.551026    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:54.551030    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:54.553045    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:55.051095    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:55.051142    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:55.051152    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:55.051157    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:55.053428    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:55.550441    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:55.550460    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:55.550470    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:55.550475    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:55.553606    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:55.553707    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:56.050952    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:56.050966    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:56.050973    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:56.050976    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:56.052832    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:48:56.551392    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:56.551413    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:56.551441    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:56.551446    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:56.553734    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:57.051356    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:57.051377    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:57.051388    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:57.051396    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:57.054556    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:57.551010    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:57.551030    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:57.551041    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:57.551047    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:57.553839    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:57.553945    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:48:58.050877    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:58.050892    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:58.050900    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:58.050903    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:58.053207    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:48:58.551669    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:58.551688    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:58.551699    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:58.551707    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:58.554730    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:59.050796    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:59.050819    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:59.050830    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:59.050835    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:59.054088    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:59.550718    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:48:59.550737    6731 round_trippers.go:469] Request Headers:
	I0819 10:48:59.550749    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:48:59.550756    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:48:59.553970    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:48:59.554048    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:00.052097    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:00.052120    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:00.052167    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:00.052198    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:00.055063    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:00.550744    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:00.550766    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:00.550776    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:00.550782    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:00.553834    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:01.051854    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:01.051873    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:01.051885    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:01.051892    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:01.055031    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:01.551302    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:01.551323    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:01.551335    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:01.551343    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:01.554596    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:01.554668    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:02.050920    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:02.050940    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:02.050958    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:02.050975    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:02.053736    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:02.552196    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:02.552230    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:02.552237    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:02.552240    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:02.554641    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:03.050838    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:03.050857    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:03.050868    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:03.050873    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:03.054125    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:03.550771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:03.550785    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:03.550794    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:03.550798    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:03.552910    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:04.052575    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:04.052595    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:04.052607    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:04.052621    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:04.055636    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:04.055705    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:04.552223    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:04.552242    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:04.552253    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:04.552259    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:04.555524    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:05.052550    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:05.052574    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:05.052588    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:05.052610    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:05.054909    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:05.552550    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:05.552568    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:05.552577    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:05.552581    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:05.556192    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:06.051290    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:06.051305    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:06.051311    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:06.051315    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:06.052929    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:06.550946    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:06.550969    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:06.550981    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:06.550989    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:06.565463    6731 round_trippers.go:574] Response Status: 404 Not Found in 14 milliseconds
	I0819 10:49:06.565539    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:07.051724    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:07.051792    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:07.051806    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:07.051822    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:07.054638    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:07.552559    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:07.552575    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:07.552583    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:07.552587    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:07.558906    6731 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0819 10:49:08.051983    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:08.052011    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:08.052048    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:08.052057    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:08.055151    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:08.550667    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:08.550693    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:08.550735    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:08.550750    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:08.553804    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:09.052706    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:09.052731    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:09.052776    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:09.052784    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:09.055712    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:09.055781    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:09.551599    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:09.551615    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:09.551624    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:09.551630    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:09.555183    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:10.050631    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:10.050657    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:10.050669    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:10.050674    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:10.054985    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:49:10.551126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:10.551137    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:10.551143    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:10.551146    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:10.553249    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:11.052626    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:11.052644    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:11.052651    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:11.052656    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:11.055384    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:11.550711    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:11.550725    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:11.550729    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:11.550733    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:11.554398    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:11.554509    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:12.051859    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:12.051884    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:12.051924    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:12.051934    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:12.055082    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:12.551161    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:12.551173    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:12.551179    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:12.551183    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:12.553279    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:13.051549    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:13.051610    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:13.051621    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:13.051628    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:13.054867    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:13.551864    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:13.551878    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:13.551884    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:13.551889    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:13.555066    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:13.555140    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:14.052199    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:14.052217    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:14.052223    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:14.052226    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:14.054562    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:14.551764    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:14.551790    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:14.551801    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:14.551807    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:14.555310    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:15.052223    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:15.052279    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:15.052293    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:15.052299    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:15.055796    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:15.550718    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:15.550733    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:15.550759    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:15.550766    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:15.554217    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:16.052643    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:16.052670    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:16.052716    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:16.052724    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:16.056008    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:16.056083    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:16.551933    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:16.551956    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:16.551968    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:16.551974    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:16.555280    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:17.051987    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:17.052008    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:17.052018    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:17.052025    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:17.055318    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:17.551734    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:17.551746    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:17.551751    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:17.551754    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:17.553654    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:18.050867    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:18.050886    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:18.050899    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:18.050904    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:18.053425    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:18.551523    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:18.551543    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:18.551551    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:18.551557    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:18.554279    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:18.554345    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:19.051204    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:19.051234    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:19.051246    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:19.051252    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:19.054668    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:19.552430    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:19.552449    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:19.552455    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:19.552460    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:19.554479    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:20.050892    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:20.050918    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:20.050930    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:20.050943    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:20.054172    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:20.552143    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:20.552182    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:20.552192    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:20.552198    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:20.554611    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:20.554681    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:21.051321    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:21.051347    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:21.051390    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:21.051401    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:21.054431    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:21.552828    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:21.552891    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:21.552901    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:21.552906    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:21.555366    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:22.051105    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:22.051128    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:22.051140    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:22.051146    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:22.054457    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:22.551053    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:22.551070    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:22.551078    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:22.551081    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:22.553091    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:23.051049    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:23.051073    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:23.051085    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:23.051092    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:23.054116    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:23.054269    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:23.551400    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:23.551419    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:23.551427    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:23.551429    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:23.556948    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:49:24.051531    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:24.051549    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:24.051561    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:24.051569    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:24.054942    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:24.551524    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:24.551548    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:24.551559    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:24.551565    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:24.554301    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:25.050993    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:25.051013    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:25.051022    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:25.051026    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:25.053462    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:25.551254    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:25.551269    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:25.551277    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:25.551283    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:25.553516    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:25.553584    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:26.051047    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:26.051070    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:26.051081    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:26.051095    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:26.053722    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:26.552294    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:26.552315    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:26.552326    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:26.552333    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:26.555323    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:27.051500    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:27.051522    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:27.051570    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:27.051580    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:27.054761    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:27.552023    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:27.552067    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:27.552074    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:27.552076    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:27.554045    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:27.554105    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:28.051012    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:28.051068    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:28.051080    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:28.051091    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:28.053095    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:28.553091    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:28.553112    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:28.553123    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:28.553130    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:28.556091    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:29.051557    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:29.051582    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:29.051593    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:29.051606    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:29.055042    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:29.551292    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:29.551307    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:29.551313    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:29.551315    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:29.553314    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:30.051884    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:30.051917    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:30.051955    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:30.051962    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:30.055200    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:30.055279    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:30.551827    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:30.551854    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:30.551865    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:30.551873    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:30.555019    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:31.051813    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:31.051841    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:31.051852    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:31.051859    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:31.054944    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:31.551163    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:31.551184    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:31.551194    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:31.551200    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:31.553888    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:32.051783    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:32.051819    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:32.051832    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:32.051840    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:32.054547    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:32.552296    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:32.552350    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:32.552364    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:32.552371    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:32.555225    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:32.555300    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:33.052924    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:33.052939    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:33.052947    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:33.052952    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:33.054987    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:33.551522    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:33.551541    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:33.551549    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:33.551553    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:33.554655    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:34.052385    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:34.052434    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:34.052446    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:34.052454    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:34.055087    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:34.551264    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:34.551281    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:34.551289    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:34.551294    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:34.553737    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:35.051346    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:35.051367    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:35.051378    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:35.051386    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:35.054339    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:35.054443    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:35.552208    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:35.552226    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:35.552233    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:35.552237    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:35.554511    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:36.051189    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:36.051204    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:36.051212    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:36.051216    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:36.053190    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:36.553334    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:36.553356    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:36.553368    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:36.553374    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:36.556524    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:37.052539    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:37.052561    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:37.052573    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:37.052580    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:37.055836    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:37.055914    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:37.553023    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:37.553043    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:37.553053    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:37.553059    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:37.556810    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:38.051735    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:38.051757    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:38.051774    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:38.051782    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:38.055061    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:38.552449    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:38.552476    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:38.552487    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:38.552492    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:38.555685    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:39.051387    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:39.051409    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:39.051420    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:39.051425    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:39.054522    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:39.552260    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:39.552285    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:39.552298    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:39.552304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:39.555403    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:39.555495    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:40.051243    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:40.051310    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:40.051324    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:40.051331    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:40.054070    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:40.551873    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:40.551898    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:40.551960    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:40.551969    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:40.554968    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:41.051578    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:41.051606    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:41.051618    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:41.051623    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:41.054807    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:41.551916    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:41.551931    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:41.551943    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:41.551947    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:41.554367    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:42.053217    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:42.053241    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:42.053249    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:42.053255    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:42.056808    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:42.056893    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:42.552774    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:42.552803    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:42.552822    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:42.552882    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:42.556248    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:43.051301    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:43.051316    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:43.051322    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:43.051328    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:43.054036    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:43.553401    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:43.553423    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:43.553434    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:43.553471    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:43.557035    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:44.053457    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:44.053478    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:44.053489    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:44.053496    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:44.056841    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:44.551566    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:44.551590    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:44.551603    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:44.551609    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:44.555416    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:44.555493    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:45.051853    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:45.051879    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:45.051888    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:45.051895    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:45.055040    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:45.553444    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:45.553468    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:45.553515    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:45.553526    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:45.556794    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:46.051786    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:46.051806    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:46.051814    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:46.051832    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:46.053901    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:46.552785    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:46.552817    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:46.552830    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:46.552836    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:46.556083    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:46.556162    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:47.053456    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:47.053482    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:47.053494    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:47.053502    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:47.057009    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:47.553130    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:47.553152    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:47.553164    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:47.553174    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:47.559073    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:49:48.053108    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:48.053134    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:48.053145    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:48.053152    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:48.057067    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:48.552706    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:48.552729    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:48.552739    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:48.552747    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:48.556474    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:48.556559    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:49.051602    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:49.051625    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:49.051637    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:49.051646    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:49.054881    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:49.552627    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:49.552655    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:49.552667    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:49.552674    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:49.556037    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:50.052601    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:50.052618    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:50.052626    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:50.052631    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:50.055469    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:50.552155    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:50.552178    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:50.552190    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:50.552195    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:50.555596    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:51.052878    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:51.052905    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:51.052917    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:51.052922    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:51.056451    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:51.056532    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:51.552110    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:51.552139    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:51.552185    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:51.552195    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:51.555342    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:52.051920    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:52.051944    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:52.051961    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:52.051973    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:52.055723    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:52.551716    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:52.551743    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:52.551753    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:52.551790    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:52.554933    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:53.051908    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:53.051920    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:53.051926    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:53.051930    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:53.053756    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:49:53.552282    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:53.552329    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:53.552340    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:53.552346    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:53.554573    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:53.554662    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:54.052641    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:54.052700    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:54.052714    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:54.052724    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:54.055914    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:54.553424    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:54.553444    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:54.553453    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:54.553461    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:54.556331    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:55.052118    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:55.052139    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:55.052150    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:55.052156    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:55.055406    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:55.552115    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:55.552140    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:55.552153    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:55.552159    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:55.555054    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:49:55.555134    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:56.053229    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:56.053253    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:56.053266    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:56.053274    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:56.056807    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:56.552807    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:56.552829    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:56.552841    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:56.552851    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:56.556291    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:57.052874    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:57.052896    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:57.052908    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:57.052913    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:57.056108    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:57.553670    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:57.553697    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:57.553745    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:57.553758    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:57.557263    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:57.557331    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:49:58.051791    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:58.051817    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:58.051828    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:58.051833    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:58.055250    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:58.552518    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:58.552545    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:58.552556    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:58.552562    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:58.555625    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:59.053863    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:59.053885    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:59.053905    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:59.053914    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:59.057121    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:49:59.553259    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:49:59.553272    6731 round_trippers.go:469] Request Headers:
	I0819 10:49:59.553278    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:49:59.553280    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:49:59.555213    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:00.052041    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:00.052090    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:00.052103    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:00.052110    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:00.054860    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:00.054945    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:00.552587    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:00.552608    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:00.552620    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:00.552626    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:00.555838    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:01.052694    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:01.052721    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:01.052732    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:01.052746    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:01.056070    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:01.553816    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:01.553839    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:01.553855    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:01.553865    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:01.557015    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:02.051783    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:02.051804    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:02.051815    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:02.051821    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:02.055085    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:02.055158    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:02.553062    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:02.553085    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:02.553097    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:02.553105    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:02.556329    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:03.052789    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:03.052811    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:03.052822    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:03.052827    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:03.055899    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:03.553258    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:03.553318    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:03.553331    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:03.553342    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:03.556755    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:04.052379    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:04.052401    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:04.052413    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:04.052420    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:04.056086    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:04.056163    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:04.552058    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:04.552079    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:04.552090    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:04.552097    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:04.554885    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:05.052906    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:05.052929    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:05.052942    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:05.052950    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:05.056201    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:05.551940    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:05.551961    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:05.551987    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:05.552004    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:05.554036    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:06.052760    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:06.052792    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:06.052801    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:06.052805    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:06.055319    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:06.551983    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:06.552008    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:06.552043    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:06.552063    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:06.554797    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:06.554875    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:07.052461    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:07.052481    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:07.052493    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:07.052501    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:07.055206    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:07.553476    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:07.553503    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:07.553555    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:07.553574    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:07.556741    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:08.052214    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:08.052241    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:08.052252    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:08.052258    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:08.055720    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:08.552079    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:08.552098    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:08.552110    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:08.552119    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:08.554790    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:09.054011    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:09.054033    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:09.054043    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:09.054051    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:09.057425    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:09.057563    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:09.553004    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:09.553024    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:09.553034    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:09.553042    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:09.556104    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:10.052832    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:10.052860    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:10.052870    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:10.052878    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:10.060001    6731 round_trippers.go:574] Response Status: 404 Not Found in 7 milliseconds
	I0819 10:50:10.553943    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:10.553967    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:10.553979    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:10.553984    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:10.557026    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:11.052217    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:11.052240    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:11.052251    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:11.052259    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:11.055611    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:11.553180    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:11.553218    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:11.553231    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:11.553237    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:11.556609    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:11.556679    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:12.053209    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:12.053234    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:12.053244    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:12.053260    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:12.056483    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:12.552948    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:12.552974    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:12.553016    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:12.553022    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:12.555995    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:13.054040    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:13.054066    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:13.054078    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:13.054086    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:13.057218    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:13.553331    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:13.553409    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:13.553428    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:13.553434    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:13.556700    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:13.557047    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:14.053359    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:14.053404    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:14.053418    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:14.053425    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:14.056093    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:14.554003    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:14.554020    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:14.554028    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:14.554033    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:14.556621    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:15.052240    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:15.052259    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:15.052267    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:15.052271    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:15.054851    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:15.552210    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:15.552233    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:15.552292    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:15.552296    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:15.554673    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:16.052627    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:16.052651    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:16.052662    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:16.052669    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:16.055859    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:16.055916    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:16.553446    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:16.553469    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:16.553480    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:16.553487    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:16.556493    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:17.052642    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:17.052665    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:17.052676    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:17.052684    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:17.055560    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:17.553327    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:17.553367    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:17.553375    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:17.553380    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:17.555848    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:18.054167    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:18.054195    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:18.054206    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:18.054214    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:18.057363    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:18.057447    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:18.552623    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:18.552664    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:18.552674    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:18.552682    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:18.556056    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:19.052692    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:19.052730    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:19.052738    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:19.052743    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:19.055382    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:19.553527    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:19.553553    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:19.553564    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:19.553602    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:19.557189    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:20.052711    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:20.052733    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:20.052744    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:20.052752    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:20.056398    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:20.552175    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:20.552196    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:20.552209    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:20.552216    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:20.555567    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:20.555628    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:21.054191    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:21.054216    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:21.054227    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:21.054235    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:21.057762    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:21.552794    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:21.552815    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:21.552827    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:21.552832    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:21.556056    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:22.052279    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:22.052315    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:22.052328    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:22.052335    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:22.055613    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:22.553162    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:22.553188    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:22.553232    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:22.553252    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:22.556362    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:22.556431    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:23.054316    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:23.054338    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:23.054350    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:23.054356    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:23.057542    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:23.552232    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:23.552245    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:23.552272    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:23.552280    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:23.553967    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:24.054003    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:24.054026    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:24.054037    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:24.054045    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:24.057299    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:24.552432    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:24.552455    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:24.552469    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:24.552477    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:24.555494    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:25.053013    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:25.053035    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:25.053047    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:25.053052    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:25.056230    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:25.056306    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:25.552539    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:25.552565    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:25.552577    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:25.552615    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:25.555941    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:26.053283    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:26.053298    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:26.053304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:26.053308    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:26.055446    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:26.553408    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:26.553431    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:26.553443    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:26.553450    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:26.556711    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:27.052272    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:27.052292    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:27.052303    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:27.052309    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:27.055283    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:27.553300    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:27.553326    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:27.553337    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:27.553344    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:27.556249    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:27.556320    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:28.052328    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:28.052357    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:28.052369    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:28.052375    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:28.054916    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:28.554421    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:28.554442    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:28.554453    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:28.554461    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:28.557682    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:29.053409    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:29.053426    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:29.053434    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:29.053438    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:29.055745    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:29.552751    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:29.552764    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:29.552769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:29.552771    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:29.554734    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:30.052686    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:30.052706    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:30.052712    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:30.052717    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:30.056887    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:50:30.056971    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:30.552691    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:30.552714    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:30.552725    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:30.552731    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:30.555684    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:31.052415    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:31.052438    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:31.052450    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:31.052456    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:31.054776    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:31.552531    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:31.552556    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:31.552611    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:31.552622    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:31.555322    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:32.053314    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:32.053340    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:32.053351    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:32.053356    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:32.056305    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:32.553594    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:32.553614    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:32.553625    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:32.553632    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:32.556478    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:32.556594    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:33.053039    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:33.053056    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:33.053065    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:33.053071    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:33.055406    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:33.553287    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:33.553306    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:33.553317    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:33.553324    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:33.555646    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:34.053235    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:34.053254    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:34.053262    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:34.053268    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:34.055633    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:34.552665    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:34.552680    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:34.552689    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:34.552693    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:34.554960    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:35.052632    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:35.052653    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:35.052664    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:35.052669    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:35.055247    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:35.055326    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:35.553273    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:35.553297    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:35.553309    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:35.553316    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:35.556601    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:36.052771    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:36.052791    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:36.052803    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:36.052809    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:36.056225    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:36.553576    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:36.553599    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:36.553611    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:36.553618    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:36.556923    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:37.052815    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:37.052842    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:37.052883    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:37.052890    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:37.055843    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:37.055915    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:37.554175    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:37.554196    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:37.554208    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:37.554215    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:37.557673    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:38.052621    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:38.052641    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:38.052652    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:38.052659    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:38.055675    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:38.554585    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:38.554641    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:38.554655    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:38.554663    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:38.558316    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:39.052502    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:39.052557    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:39.052585    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:39.052593    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:39.055843    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:39.553574    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:39.553601    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:39.553612    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:39.553650    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:39.557016    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:39.557096    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:40.052628    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:40.052657    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:40.052695    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:40.052721    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:40.055547    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:40.553381    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:40.553406    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:40.553444    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:40.553450    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:40.556591    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:41.053865    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:41.053894    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:41.053906    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:41.053914    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:41.057267    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:41.553609    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:41.553633    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:41.553644    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:41.553652    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:41.556535    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:42.053547    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:42.053575    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:42.053585    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:42.053591    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:42.056838    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:42.056911    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:42.552950    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:42.552967    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:42.552975    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:42.552979    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:42.555606    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:43.054679    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:43.054705    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:43.054716    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:43.054723    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:43.057954    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:43.553147    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:43.553170    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:43.553180    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:43.553187    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:43.556659    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:44.052693    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:44.052712    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:44.052725    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:44.052731    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:44.055591    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:44.553352    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:44.553405    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:44.553418    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:44.553427    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:44.556267    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:44.556423    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:45.052819    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:45.052873    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:45.052887    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:45.052898    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:45.055681    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:45.553717    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:45.553743    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:45.553754    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:45.553760    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:45.557371    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:46.053721    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:46.053741    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:46.053750    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:46.053755    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:46.056953    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:46.554733    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:46.554759    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:46.554770    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:46.554776    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:46.557881    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:46.557956    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:47.053088    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:47.053114    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:47.053139    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:47.053178    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:47.057150    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:47.553469    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:47.553491    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:47.553503    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:47.553509    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:47.556795    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:48.053927    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:48.053949    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:48.053961    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:48.053967    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:48.057833    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:48.554794    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:48.554819    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:48.554829    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:48.554836    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:48.558066    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:48.558139    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:49.053347    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:49.053369    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:49.053380    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:49.053385    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:49.056191    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:49.552995    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:49.553017    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:49.553028    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:49.553035    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:49.556705    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:50.052811    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:50.052836    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:50.052848    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:50.052857    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:50.056125    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:50.553318    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:50.553336    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:50.553343    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:50.553348    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:50.555815    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:51.054852    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:51.054879    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:51.054922    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:51.054929    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:51.058448    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:51.058549    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:51.554735    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:51.554757    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:51.554769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:51.554777    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:51.558250    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:52.053837    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:52.053859    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:52.053871    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:52.053878    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:52.057090    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:52.553164    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:52.553185    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:52.553196    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:52.553203    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:52.556093    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:53.052774    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:53.052789    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:53.052796    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:53.052802    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:53.054809    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:53.553273    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:53.553289    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:53.553296    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:53.553300    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:53.555457    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:53.555522    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:54.054101    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:54.054116    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:54.054126    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:54.054130    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:54.056415    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:54.554015    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:54.554035    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:54.554045    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:54.554052    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:54.557294    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:55.053376    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:55.053396    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:55.053407    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:55.053412    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:55.056562    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:55.553034    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:55.553047    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:55.553054    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:55.553057    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:55.555385    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:56.053965    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:56.053990    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:56.054002    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:56.054007    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:56.057002    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:56.057072    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:56.554082    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:56.554107    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:56.554118    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:56.554125    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:56.557276    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:57.053741    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:57.053768    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:57.053780    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:57.053786    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:57.057162    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:57.554395    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:57.554421    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:57.554433    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:57.554440    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:57.557885    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:58.052984    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:58.052998    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:58.053006    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:58.053010    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:58.055164    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:50:58.553222    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:58.553241    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:58.553271    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:58.553276    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:58.555082    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:50:58.555137    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:50:59.054358    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:59.054380    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:59.054392    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:59.054413    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:59.058040    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:50:59.553380    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:50:59.553408    6731 round_trippers.go:469] Request Headers:
	I0819 10:50:59.553419    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:50:59.553425    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:50:59.556014    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:00.053290    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:00.053308    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:00.053344    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:00.053349    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:00.055796    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:00.553346    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:00.553373    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:00.553384    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:00.553391    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:00.556794    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:00.556903    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:01.053146    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:01.053172    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:01.053215    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:01.053225    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:01.055877    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:01.553221    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:01.553247    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:01.553258    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:01.553265    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:01.556552    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:02.055126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:02.055160    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:02.055175    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:02.055184    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:02.058471    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:02.553937    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:02.553960    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:02.553970    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:02.553975    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:02.557401    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:02.557478    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:03.053784    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:03.053806    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:03.053857    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:03.053867    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:03.056959    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:03.553699    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:03.553755    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:03.553769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:03.553777    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:03.556657    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:04.055276    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:04.055300    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:04.055312    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:04.055319    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:04.058607    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:04.553743    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:04.553769    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:04.553780    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:04.553784    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:04.557143    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:05.054407    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:05.054427    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:05.054439    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:05.054452    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:05.057462    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:05.057531    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:05.554464    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:05.554485    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:05.554497    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:05.554502    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:05.557990    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:06.053104    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:06.053129    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:06.053141    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:06.053150    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:06.055868    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:06.553581    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:06.553600    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:06.553612    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:06.553620    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:06.556556    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:07.053664    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:07.053686    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:07.053698    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:07.053708    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:07.057073    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:07.553166    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:07.553191    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:07.553203    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:07.553210    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:07.556450    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:07.556521    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:08.053159    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:08.053174    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:08.053183    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:08.053188    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:08.055328    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:08.553866    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:08.553892    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:08.553904    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:08.553912    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:08.556775    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:09.054290    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:09.054339    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:09.054352    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:09.054358    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:09.057196    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:09.554985    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:09.555010    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:09.555022    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:09.555027    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:09.558086    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:09.558151    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:10.054595    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:10.054620    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:10.054630    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:10.054636    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:10.057941    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:10.555296    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:10.555323    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:10.555373    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:10.555381    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:10.558254    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:11.054279    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:11.054304    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:11.054314    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:11.054320    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:11.057361    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:11.554127    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:11.554148    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:11.554159    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:11.554164    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:11.557132    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:12.053339    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:12.053363    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:12.053380    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:12.053386    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:12.055874    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:12.055948    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:12.555345    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:12.555364    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:12.555375    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:12.555384    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:12.558576    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:13.054454    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:13.054474    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:13.054485    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:13.054491    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:13.057567    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:13.553571    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:13.553591    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:13.553601    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:13.553606    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:13.556946    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:14.055315    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:14.055337    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:14.055348    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:14.055354    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:14.058746    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:14.058822    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:14.554232    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:14.554256    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:14.554267    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:14.554273    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:14.557669    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:15.054617    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:15.054652    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:15.054662    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:15.054668    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:15.057043    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:15.554967    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:15.554988    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:15.555000    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:15.555005    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:15.557951    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:16.054869    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:16.054894    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:16.054934    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:16.054942    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:16.057848    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:16.553740    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:16.553764    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:16.553803    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:16.553811    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:16.556855    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:16.556925    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:17.054370    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:17.054396    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:17.054407    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:17.054415    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:17.057649    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:17.554197    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:17.554250    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:17.554263    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:17.554272    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:17.556745    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:18.053431    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:18.053450    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:18.053461    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:18.053466    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:18.057060    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:18.554353    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:18.554367    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:18.554375    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:18.554381    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:18.556869    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:19.055419    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:19.055442    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:19.055458    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:19.055463    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:19.058903    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:19.059063    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:19.554833    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:19.554848    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:19.554854    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:19.554858    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:19.556762    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:20.054915    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:20.054936    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:20.054947    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:20.054953    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:20.057947    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:20.553863    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:20.553887    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:20.553899    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:20.553906    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:20.557142    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:21.055333    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:21.055359    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:21.055370    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:21.055376    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:21.058593    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:21.554854    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:21.554874    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:21.554885    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:21.554893    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:21.557756    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:21.557904    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:22.055272    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:22.055298    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:22.055309    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:22.055320    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:22.058761    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:22.554889    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:22.554913    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:22.554957    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:22.554966    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:22.557884    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:23.053593    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:23.053677    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:23.053684    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:23.053690    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:23.055671    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:23.554897    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:23.554915    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:23.554921    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:23.554925    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:23.556865    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:24.055573    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:24.055600    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:24.055612    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:24.055621    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:24.058999    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:24.059072    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:24.554103    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:24.554125    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:24.554136    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:24.554143    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:24.557593    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:25.055623    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:25.055650    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:25.055661    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:25.055666    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:25.058974    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:25.554496    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:25.554516    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:25.554528    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:25.554533    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:25.557257    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:26.054612    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:26.054675    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:26.054682    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:26.054689    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:26.056656    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:26.554520    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:26.554539    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:26.554548    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:26.554552    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:26.556903    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:26.556961    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:27.055130    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:27.055156    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:27.055167    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:27.055175    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:27.058320    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:27.554836    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:27.554863    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:27.554872    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:27.554880    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:27.558351    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:28.055628    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:28.055651    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:28.055665    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:28.055671    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:28.058655    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:28.554813    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:28.554839    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:28.554852    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:28.554858    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:28.558122    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:28.558200    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:29.054994    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:29.055021    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:29.055062    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:29.055069    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:29.058014    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:29.554426    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:29.554442    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:29.554451    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:29.554455    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:29.556542    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:30.054152    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:30.054172    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:30.054182    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:30.054188    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:30.056862    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:30.554508    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:30.554519    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:30.554526    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:30.554529    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:30.556491    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:31.054836    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:31.054858    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:31.054869    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:31.054876    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:31.057795    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:31.057884    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:31.554037    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:31.554063    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:31.554075    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:31.554084    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:31.559945    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:51:32.054494    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:32.054513    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:32.054522    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:32.054525    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:32.056953    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:32.554097    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:32.554118    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:32.554130    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:32.554137    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:32.558190    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:51:33.054128    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:33.054153    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:33.054164    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:33.054170    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:33.056763    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:33.553714    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:33.553752    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:33.553760    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:33.553764    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:33.556405    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:33.556457    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:34.054545    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:34.054569    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:34.054617    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:34.054624    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:34.057511    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:34.554849    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:34.554871    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:34.554883    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:34.554888    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:34.558363    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:35.053988    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:35.054013    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:35.054024    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:35.054031    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:35.056770    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:35.554587    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:35.554609    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:35.554619    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:35.554625    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:35.557960    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:35.558034    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:36.054198    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:36.054222    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:36.054229    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:36.054232    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:36.055802    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:36.554404    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:36.554428    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:36.554440    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:36.554446    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:36.557090    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:37.054425    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:37.054479    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:37.054490    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:37.054498    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:37.057228    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:37.555500    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:37.555512    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:37.555518    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:37.555521    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:37.557601    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:38.053768    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:38.053782    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:38.053791    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:38.053795    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:38.056165    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:38.056257    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:38.554665    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:38.554676    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:38.554682    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:38.554685    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:38.557419    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:39.054356    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:39.054378    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:39.054389    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:39.054395    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:39.057852    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:39.554782    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:39.554836    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:39.554844    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:39.554848    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:39.557248    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:40.054272    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:40.054293    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:40.054304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:40.054310    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:40.056976    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:40.057062    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:40.555343    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:40.555383    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:40.555394    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:40.555400    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:40.557223    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:41.054729    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:41.054786    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:41.054799    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:41.054806    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:41.057633    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:41.554501    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:41.554567    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:41.554582    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:41.554591    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:41.557830    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:42.054529    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:42.054554    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:42.054563    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:42.054568    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:42.057815    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:42.057887    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:42.555358    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:42.555370    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:42.555377    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:42.555381    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:42.557069    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:43.055502    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:43.055544    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:43.055552    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:43.055560    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:43.057767    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:43.554618    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:43.554638    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:43.554685    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:43.554690    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:43.557317    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:44.054601    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:44.054620    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:44.054626    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:44.054630    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:44.056993    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:44.554782    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:44.554797    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:44.554806    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:44.554810    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:44.556419    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:44.556476    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:45.054525    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:45.054559    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:45.054596    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:45.054633    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:45.058027    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:45.554369    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:45.554385    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:45.554393    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:45.554397    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:45.556944    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:46.054888    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:46.054906    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:46.054915    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:46.054919    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:46.057107    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:46.554088    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:46.554113    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:46.554124    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:46.554130    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:46.557394    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:46.557468    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:47.054175    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:47.054197    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:47.054209    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:47.054217    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:47.057370    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:47.555569    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:47.555594    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:47.555647    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:47.555655    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:47.559047    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:48.055273    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:48.055289    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:48.055300    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:48.055311    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:48.057338    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:48.554690    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:48.554708    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:48.554718    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:48.554724    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:48.557402    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:49.054179    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:49.054233    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:49.054246    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:49.054253    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:49.056979    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:49.057112    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:49.555596    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:49.555619    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:49.555629    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:49.555633    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:49.558319    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:50.054126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:50.054150    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:50.054161    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:50.054168    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:50.057661    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:50.555084    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:50.555110    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:50.555124    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:50.555133    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:50.558415    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:51.054816    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:51.054839    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:51.054854    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:51.054860    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:51.058330    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:51.058413    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:51.554613    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:51.554634    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:51.554645    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:51.554652    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:51.557804    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:52.054564    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:52.054619    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:52.054632    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:52.054638    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:52.057826    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:52.555343    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:52.555366    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:52.555378    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:52.555385    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:52.558107    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:53.055011    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:53.055025    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:53.055034    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:53.055037    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:53.057184    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:53.555329    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:53.555354    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:53.555366    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:53.555372    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:53.558170    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:53.558239    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:54.054793    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:54.054810    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:54.054818    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:54.054823    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:54.057650    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:54.556214    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:54.556241    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:54.556284    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:54.556295    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:54.559721    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:55.054592    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:55.054612    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:55.054624    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:55.054630    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:55.057530    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:55.554855    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:55.554874    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:55.554882    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:55.554886    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:55.557320    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:56.055331    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:56.055352    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:56.055361    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:56.055365    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:56.058215    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:56.058278    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:56.554547    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:56.554568    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:56.554579    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:56.554584    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:56.556705    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:57.054552    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:57.054565    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:57.054570    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:57.054572    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:57.056500    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:51:57.555559    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:57.555585    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:57.555626    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:57.555635    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:57.558863    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:58.054689    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:58.054707    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:58.054737    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:58.054742    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:58.057151    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:51:58.556315    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:58.556341    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:58.556352    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:58.556365    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:58.559715    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:58.559793    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:51:59.055113    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:59.055174    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:59.055189    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:59.055197    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:59.058730    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:51:59.555567    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:51:59.555594    6731 round_trippers.go:469] Request Headers:
	I0819 10:51:59.555607    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:51:59.555612    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:51:59.558994    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:00.055486    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:00.055514    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:00.055526    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:00.055533    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:00.058720    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:00.555382    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:00.555401    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:00.555412    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:00.555418    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:00.558653    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:01.055751    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:01.055778    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:01.055790    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:01.055797    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:01.058484    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:01.058546    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:01.556276    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:01.556294    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:01.556304    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:01.556307    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:01.558623    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:02.054896    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:02.054920    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:02.054973    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:02.054980    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:02.057416    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:02.554490    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:02.554516    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:02.554557    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:02.554568    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:02.557605    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:03.054883    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:03.054898    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:03.054907    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:03.054913    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:03.057408    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:03.554821    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:03.554844    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:03.554856    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:03.554862    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:03.557821    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:03.557893    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:04.054425    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:04.054474    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:04.054486    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:04.054493    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:04.057361    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:04.555269    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:04.555292    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:04.555303    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:04.555310    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:04.557975    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:05.055439    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:05.055462    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:05.055474    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:05.055480    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:05.058438    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:05.555041    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:05.555066    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:05.555110    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:05.555119    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:05.558183    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:05.558255    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:06.054744    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:06.054767    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:06.054780    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:06.054786    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:06.057960    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:06.554522    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:06.554548    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:06.554560    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:06.554568    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:06.557313    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:07.055173    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:07.055199    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:07.055239    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:07.055247    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:07.058653    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:07.555300    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:07.555317    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:07.555328    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:07.555333    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:07.558041    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:08.055354    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:08.055368    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:08.055376    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:08.055379    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:08.057374    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:08.057433    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:08.555236    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:08.555259    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:08.555270    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:08.555277    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:08.558651    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:09.055614    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:09.055640    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:09.055650    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:09.055683    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:09.058939    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:09.556607    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:09.556630    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:09.556641    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:09.556646    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:09.559951    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:10.056557    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:10.056584    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:10.056595    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:10.056603    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:10.060049    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:10.060123    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:10.555721    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:10.555747    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:10.555758    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:10.555766    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:10.559208    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:11.054718    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:11.054745    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:11.054757    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:11.054765    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:11.058258    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:11.554755    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:11.554775    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:11.554787    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:11.554792    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:11.557852    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:12.054659    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:12.054685    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:12.054725    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:12.054736    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:12.057557    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:12.555786    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:12.555805    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:12.555816    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:12.555825    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:12.558720    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:12.558790    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:13.054520    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:13.054531    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:13.054537    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:13.054541    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:13.056746    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:13.555035    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:13.555056    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:13.555069    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:13.555076    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:13.558241    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:14.055844    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:14.055904    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:14.055918    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:14.055926    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:14.059251    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:14.556682    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:14.556705    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:14.556718    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:14.556724    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:14.560091    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:14.560167    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:15.055321    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:15.055341    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:15.055353    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:15.055358    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:15.058575    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:15.554664    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:15.554684    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:15.554698    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:15.554706    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:15.557939    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:16.055206    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:16.055227    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:16.055238    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:16.055246    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:16.058598    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:16.555194    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:16.555214    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:16.555226    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:16.555232    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:16.558383    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:17.056686    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:17.056714    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:17.056726    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:17.056731    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:17.060029    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:17.060100    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:17.556714    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:17.556740    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:17.556750    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:17.556755    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:17.560141    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:18.054996    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:18.055011    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:18.055019    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:18.055025    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:18.057822    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:18.555828    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:18.555841    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:18.555849    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:18.555854    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:18.558383    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:19.055041    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:19.055065    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:19.055077    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:19.055085    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:19.058023    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:19.555151    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:19.555177    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:19.555188    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:19.555193    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:19.558408    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:19.558484    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:20.055165    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:20.055192    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:20.055253    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:20.055266    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:20.058241    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:20.555361    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:20.555384    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:20.555396    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:20.555404    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:20.558504    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:21.056388    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:21.056411    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:21.056424    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:21.056429    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:21.059536    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:21.554779    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:21.554793    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:21.554802    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:21.554805    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:21.557366    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:22.055736    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:22.055758    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:22.055769    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:22.055776    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:22.058591    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:22.058661    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:22.555812    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:22.555836    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:22.555847    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:22.555854    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:22.558948    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:23.056853    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:23.056919    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:23.056944    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:23.056953    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:23.062337    6731 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 10:52:23.554982    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:23.555000    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:23.555011    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:23.555018    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:23.557644    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:24.054899    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:24.054938    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:24.054947    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:24.054953    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:24.057729    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:24.556586    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:24.556600    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:24.556623    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:24.556627    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:24.558638    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:24.558692    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:25.056076    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:25.056096    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:25.056107    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:25.056114    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:25.058803    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:25.556269    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:25.556291    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:25.556303    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:25.556309    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:25.559377    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:26.055956    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:26.055982    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:26.055993    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:26.056000    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:26.059192    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:26.556280    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:26.556302    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:26.556313    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:26.556321    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:26.559053    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:26.559129    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:27.055476    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:27.055501    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:27.055512    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:27.055518    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:27.059048    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:27.554857    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:27.554875    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:27.554889    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:27.554899    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:27.557516    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:28.056934    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:28.056960    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:28.056970    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:28.056977    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:28.061498    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:52:28.556243    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:28.556264    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:28.556274    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:28.556280    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:28.560054    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:28.560129    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:29.056620    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:29.056646    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:29.056690    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:29.056714    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:29.060206    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:29.555385    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:29.555411    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:29.555422    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:29.555429    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:29.558512    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:30.055471    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:30.055493    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:30.055506    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:30.055514    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:30.058459    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:30.555484    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:30.555504    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:30.555516    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:30.555524    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:30.558311    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:31.054968    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:31.055015    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:31.055027    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:31.055032    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:31.057916    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:31.058060    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:31.556014    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:31.556033    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:31.556044    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:31.556050    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:31.559609    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:32.056534    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:32.056581    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:32.056591    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:32.056597    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:32.059302    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:32.555775    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:32.555794    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:32.555806    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:32.555814    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:32.558491    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:33.057040    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:33.057067    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:33.057077    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:33.057085    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:33.060635    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:33.060713    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:33.555570    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:33.555591    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:33.555602    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:33.555608    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:33.559425    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:34.057120    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:34.057141    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:34.057148    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:34.057153    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:34.060018    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:34.555126    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:34.555138    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:34.555146    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:34.555150    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:34.557094    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:35.055444    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:35.055467    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:35.055479    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:35.055486    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:35.058594    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:35.555149    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:35.555197    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:35.555209    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:35.555218    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:35.558115    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:35.558186    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:36.056849    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:36.056876    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:36.056920    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:36.056932    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:36.060766    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:36.555499    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:36.555519    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:36.555528    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:36.555532    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:36.558358    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:37.055144    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:37.055195    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:37.055208    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:37.055215    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:37.058216    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:37.555944    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:37.556001    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:37.556013    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:37.556023    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:37.559260    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:37.559332    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:38.055318    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:38.055338    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:38.055350    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:38.055355    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:38.058181    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:38.555299    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:38.555317    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:38.555329    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:38.555337    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:38.558216    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:39.056988    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:39.057016    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:39.057073    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:39.057083    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:39.060253    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:39.555159    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:39.555181    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:39.555193    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:39.555200    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:39.558336    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:40.055085    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:40.055100    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:40.055105    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:40.055108    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:40.057225    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:40.057326    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:40.556336    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:40.556362    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:40.556374    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:40.556380    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:40.559611    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:41.056619    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:41.056644    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:41.056655    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:41.056661    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:41.060851    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:52:41.555283    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:41.555295    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:41.555302    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:41.555305    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:41.556982    6731 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0819 10:52:42.056943    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:42.056967    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:42.056978    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:42.056985    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:42.060100    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:42.060167    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:42.556338    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:42.556357    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:42.556367    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:42.556377    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:42.559414    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:43.055551    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:43.055573    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:43.055586    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:43.055594    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:43.058624    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:43.555249    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:43.555259    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:43.555264    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:43.555266    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:43.557514    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:44.057256    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:44.057279    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:44.057320    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:44.057332    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:44.060185    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:44.060336    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:44.555282    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:44.555310    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:44.555349    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:44.555359    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:44.557869    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:45.055728    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:45.055742    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:45.055751    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:45.055756    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:45.058016    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:45.556887    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:45.556939    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:45.556953    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:45.556961    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:45.560018    6731 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 10:52:46.055302    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:46.055315    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:46.055321    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:46.055324    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:46.059667    6731 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0819 10:52:46.555661    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:46.555681    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:46.555693    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:46.555699    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:46.558535    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:46.558625    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:47.055328    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:47.055352    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:47.055364    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:47.055370    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:47.062725    6731 round_trippers.go:574] Response Status: 404 Not Found in 7 milliseconds
	I0819 10:52:47.555663    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:47.555688    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:47.555699    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:47.555706    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:47.557822    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:48.056671    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:48.056687    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:48.056695    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:48.056700    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:48.059006    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:48.555409    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:48.555429    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:48.555441    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:48.555450    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:48.557941    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:49.057092    6731 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-431000-m03
	I0819 10:52:49.057119    6731 round_trippers.go:469] Request Headers:
	I0819 10:52:49.057131    6731 round_trippers.go:473]     Accept: application/json, */*
	I0819 10:52:49.057137    6731 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0819 10:52:49.060065    6731 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 10:52:49.060130    6731 node_ready.go:53] error getting node "ha-431000-m03": nodes "ha-431000-m03" not found
	I0819 10:52:49.060145    6731 node_ready.go:38] duration metric: took 4m0.005002355s for node "ha-431000-m03" to be "Ready" ...
	I0819 10:52:49.082024    6731 out.go:201] 
	W0819 10:52:49.103661    6731 out.go:270] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0819 10:52:49.103680    6731 out.go:270] * 
	W0819 10:52:49.104908    6731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 10:52:49.166900    6731 out.go:201] 
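
[Editor's note] The 404 loop above is minikube's node_ready check polling the API server roughly every 500 ms until its deadline expires; the node object for "ha-431000-m03" was never created, so every GET returns 404 and the run exits with GUEST_START. As an illustration only (a minimal sketch, not minikube's actual implementation), a readiness poll of this shape can be written with client-go as follows; the kubeconfig path is hypothetical:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports
// Ready=True or the context deadline expires. A 404 (node object not
// yet created) is treated as "keep waiting", matching the loop in the
// log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			// Node not registered yet; poll again on the next tick.
		case err != nil:
			return err
		default:
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-431000-m03"); err != nil {
		fmt.Println(err)
	}
}

In this failure the poll can never succeed: the loop only observes node status, so if the kubelet on the third control-plane node never registers, the wait runs out its deadline exactly as recorded above.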
	
	
	==> Docker <==
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.660449818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.667060942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.667102169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.667230179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 cri-dockerd[1452]: time="2024-08-19T17:48:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bb2d3a2636faf0cfb532ba0f74d5469305e3758ab39cbdf9fa28f8ef5ebf4c3d/resolv.conf as [nameserver 192.169.0.1]"
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.701236024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.701309443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.701321973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.701403920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.820778563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.820826586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.820837953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.820905001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.876030412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.876130553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.876143392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:23 ha-431000 dockerd[1203]: time="2024-08-19T17:48:23.876235719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:48:54 ha-431000 dockerd[1203]: time="2024-08-19T17:48:54.187251071Z" level=info msg="shim disconnected" id=a84c42391a84af02fac8bc4d031f949d77c9b2ceebf766d7c6c36a32ac6a9c95 namespace=moby
	Aug 19 17:48:54 ha-431000 dockerd[1197]: time="2024-08-19T17:48:54.187571465Z" level=info msg="ignoring event" container=a84c42391a84af02fac8bc4d031f949d77c9b2ceebf766d7c6c36a32ac6a9c95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:48:54 ha-431000 dockerd[1203]: time="2024-08-19T17:48:54.187882726Z" level=warning msg="cleaning up after shim disconnected" id=a84c42391a84af02fac8bc4d031f949d77c9b2ceebf766d7c6c36a32ac6a9c95 namespace=moby
	Aug 19 17:48:54 ha-431000 dockerd[1203]: time="2024-08-19T17:48:54.187960780Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:49:06 ha-431000 dockerd[1203]: time="2024-08-19T17:49:06.688629405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:49:06 ha-431000 dockerd[1203]: time="2024-08-19T17:49:06.688666721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:49:06 ha-431000 dockerd[1203]: time="2024-08-19T17:49:06.688675306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:49:06 ha-431000 dockerd[1203]: time="2024-08-19T17:49:06.688795214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
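
Note: the bursts of "loading plugin" lines above are emitted by the containerd runc-v2 shim once per container start, so each burst marks a container coming up after the restart; the "shim disconnected ... cleaning up dead shim" lines at 17:48:54 mark container a84c42391a84a (the Exited storage-provisioner, attempt 2, in the container status table below) going down. A sketch for re-collecting this section from the guest, assuming the ha-431000 VM is still running:

	out/minikube-darwin-amd64 ssh -p ha-431000 "sudo journalctl -u docker --no-pager | tail -n 100"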
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bcf3cd19406a4       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       3                   19da8eae0d48a       storage-provisioner
	414908be37c88       8c811b4aec35f                                                                                         6 minutes ago       Running             busybox                   1                   fd28a05caf8d7       busybox-7dff88458-x7m6m
	51e18fb0428a6       12968670680f4                                                                                         6 minutes ago       Running             kindnet-cni               1                   bb2d3a2636faf       kindnet-lvdbg
	d7843c76d3e01       cbb01a7bd410d                                                                                         6 minutes ago       Running             coredns                   1                   ca4ec932efa63       coredns-6f6b679f8f-vc76p
	a84c42391a84a       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       2                   19da8eae0d48a       storage-provisioner
	29764bad0bc90       cbb01a7bd410d                                                                                         6 minutes ago       Running             coredns                   1                   1d64ea8ea4f81       coredns-6f6b679f8f-hr2qx
	5636b94096fee       ad83b2ca7b09e                                                                                         6 minutes ago       Running             kube-proxy                1                   5627589c9455b       kube-proxy-5l56s
	f4bd8ba2e0437       045733566833c                                                                                         6 minutes ago       Running             kube-controller-manager   2                   1a643a0353bfb       kube-controller-manager-ha-431000
	11f4d59b4fb1d       38af8ddebf499                                                                                         6 minutes ago       Running             kube-vip                  0                   43fb644937b95       kube-vip-ha-431000
	dea4f29e78603       1766f54c897f0                                                                                         6 minutes ago       Running             kube-scheduler            1                   9e839ed84518f       kube-scheduler-ha-431000
	4ed272951c848       045733566833c                                                                                         6 minutes ago       Exited              kube-controller-manager   1                   1a643a0353bfb       kube-controller-manager-ha-431000
	a003b845ec488       604f5db92eaa8                                                                                         6 minutes ago       Running             kube-apiserver            3                   545d8a82cc659       kube-apiserver-ha-431000
	1bac9a6bc6836       2e96e5913fc06                                                                                         6 minutes ago       Running             etcd                      1                   c143d60007e3b       etcd-ha-431000
	4c18dbcc00045       604f5db92eaa8                                                                                         8 minutes ago       Exited              kube-apiserver            2                   5a0fe916eaf1d       kube-apiserver-ha-431000
	da6e4a61b6cf8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   24 minutes ago      Exited              busybox                   0                   6d38fc70c811c       busybox-7dff88458-x7m6m
	b9d1bccf00c94       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   74fd2f09b011a       coredns-6f6b679f8f-hr2qx
	a3891ab602da5       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   c3745c7f8fb9f       coredns-6f6b679f8f-vc76p
	37cd2e9ed2f34       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              26 minutes ago      Exited              kindnet-cni               0                   568b6f1ff9aaf       kindnet-lvdbg
	889ab608901bb       ad83b2ca7b09e                                                                                         26 minutes ago      Exited              kube-proxy                0                   fde7b27c3d1a5       kube-proxy-5l56s
	11d9cd3b2f49f       1766f54c897f0                                                                                         26 minutes ago      Exited              kube-scheduler            0                   4c252909f338f       kube-scheduler-ha-431000
	39fe08877284d       2e96e5913fc06                                                                                         26 minutes ago      Exited              etcd                      0                   fc30d54d1b565       etcd-ha-431000
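
The ATTEMPT column above records per-container restart counts: storage-provisioner is on attempt 3 after attempt 2 (a84c42391a84a) exited, and kube-apiserver is on attempt 3 after attempt 2 (4c18dbcc00045) exited, matching the dockerd shim-cleanup lines earlier. The same table can be reproduced from the guest's CRI socket; a sketch assuming crictl is available in the guest, as it normally is on the minikube ISO:

	out/minikube-darwin-amd64 ssh -p ha-431000 "sudo crictl ps -a"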
	
	
	==> coredns [29764bad0bc9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57280 - 39922 "HINFO IN 6598223870971274302.2706221343910350861. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01011612s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[281575694]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.217) (total time: 30003ms):
	Trace[281575694]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:48:54.221)
	Trace[281575694]: [30.003763494s] [30.003763494s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1147384648]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.218) (total time: 30003ms):
	Trace[1147384648]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:48:54.221)
	Trace[1147384648]: [30.003739495s] [30.003739495s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[953244717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.220) (total time: 30001ms):
	Trace[953244717]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:48:54.221)
	Trace[953244717]: [30.001122159s] [30.001122159s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
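
All three 30-second traces above fail the same way, "dial tcp 10.96.0.1:443: i/o timeout": this CoreDNS replica could not reach the kubernetes Service ClusterIP for roughly the first 30 seconds after it started at 17:48:24, which lines up with kube-proxy on this node only reporting "Starting" around that window (see the node events below). A hedged probe from inside the guest; this assumes curl is present in the guest image and relies on the default system:public-info-viewer role allowing anonymous access to /readyz:

	out/minikube-darwin-amd64 ssh -p ha-431000 "curl -sk --connect-timeout 5 https://10.96.0.1:443/readyz"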
	
	
	==> coredns [a3891ab602da] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[384323591]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.607) (total time: 12726ms):
	Trace[384323591]: ---"Objects listed" error:Unauthorized 12726ms (17:45:24.333)
	Trace[384323591]: [12.726289493s] [12.726289493s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[183169271]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.561) (total time: 12772ms):
	Trace[183169271]: ---"Objects listed" error:Unauthorized 12772ms (17:45:24.334)
	Trace[183169271]: [12.77286543s] [12.77286543s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[321930627]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.615) (total time: 12720ms):
	Trace[321930627]: ---"Objects listed" error:Unauthorized 12719ms (17:45:24.334)
	Trace[321930627]: [12.72052183s] [12.72052183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b9d1bccf00c9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[593417891]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.204) (total time: 13131ms):
	Trace[593417891]: ---"Objects listed" error:Unauthorized 13130ms (17:45:24.335)
	Trace[593417891]: [13.131401942s] [13.131401942s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1133648867]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.419) (total time: 12917ms):
	Trace[1133648867]: ---"Objects listed" error:Unauthorized 12916ms (17:45:24.335)
	Trace[1133648867]: [12.917404362s] [12.917404362s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1960632058]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:45:11.301) (total time: 13035ms):
	Trace[1960632058]: ---"Objects listed" error:Unauthorized 13034ms (17:45:24.335)
	Trace[1960632058]: [13.035512102s] [13.035512102s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
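
Unlike the i/o timeouts above, the "Unauthorized" failures in the two exited replicas (a3891ab602da and b9d1bccf00c9) are authentication errors: around 17:45:24 the apiserver was reachable but rejected the pods' existing service-account credentials, presumably because the restarting control plane no longer accepted them, and both replicas were then SIGTERMed and replaced by the attempt-1 containers whose logs appear in the other two coredns sections. If this recurred outside a restart window, one hedged remediation sketch (assuming kubectl points at this cluster) is to recycle the deployment so the pods remount fresh tokens:

	kubectl -n kube-system rollout restart deployment coredns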
	
	
	==> coredns [d7843c76d3e0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52034 - 20734 "HINFO IN 58890247287997822.7011696019754483361. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.010598723s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[901481756]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.217) (total time: 30003ms):
	Trace[901481756]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:48:54.220)
	Trace[901481756]: [30.003857838s] [30.003857838s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1030491669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.220) (total time: 30001ms):
	Trace[1030491669]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:48:54.221)
	Trace[1030491669]: [30.001096527s] [30.001096527s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1524033155]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.217) (total time: 30003ms):
	Trace[1524033155]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:48:54.220)
	Trace[1524033155]: [30.003971024s] [30.003971024s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T10_27_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:54:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:53:17 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:53:17 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:53:17 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:53:17 +0000   Mon, 19 Aug 2024 17:46:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 091fd90bc5e54c778c79f60719f28fee
	  System UUID:                7f844fbb-0000-0000-b5d6-699bdfe1640c
	  Boot ID:                    d77cc3ba-25a4-4e2f-b353-1894538ac2ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x7m6m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-6f6b679f8f-hr2qx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-6f6b679f8f-vc76p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-lvdbg                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-5l56s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m8s                   kube-proxy       
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)      kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     26m (x7 over 26m)      kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)      kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 26m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 26m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                    node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  RegisteredNode           25m                    node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  RegisteredNode           8m52s                  node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeNotReady             8m49s                  node-controller  Node ha-431000 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  8m15s (x2 over 26m)    kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s (x2 over 26m)    kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s (x2 over 26m)    kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m15s (x2 over 26m)    kubelet          Node ha-431000 status is now: NodeReady
	  Normal  Starting                 6m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m59s (x8 over 6m59s)  kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m59s (x8 over 6m59s)  kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m59s (x7 over 6m59s)  kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m27s                  node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	
	
	Name:               ha-431000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_28_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:28:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:54:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:53:08 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:53:08 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:53:08 +0000   Mon, 19 Aug 2024 17:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:53:08 +0000   Mon, 19 Aug 2024 17:48:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-431000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 f78ea9d3ce4f4999bd0f517107045dac
	  System UUID:                decf4e23-0000-0000-95db-084dbcc69753
	  Boot ID:                    30b31def-c649-4af2-9bf8-357051f66687
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2l9lq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 etcd-ha-431000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-qmgqd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-ha-431000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-431000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-5h7j2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-ha-431000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-431000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 25m                    kube-proxy       
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  Starting                 8m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  25m (x8 over 25m)      kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m (x8 over 25m)      kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m (x7 over 25m)      kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           25m                    node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           25m                    node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  NodeAllocatableEnforced  9m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    9m4s (x8 over 9m5s)    kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m4s (x7 over 9m5s)    kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m4s (x8 over 9m5s)    kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m52s                  node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  Starting                 6m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m40s (x8 over 6m40s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x8 over 6m40s)  kubelet          Node ha-431000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x7 over 6m40s)  kubelet          Node ha-431000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m27s                  node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-431000-m02 event: Registered Node ha-431000-m02 in Controller
	
	
	Name:               ha-431000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T10_42_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:42:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:46:31 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 17:46:03 +0000   Mon, 19 Aug 2024 17:48:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-431000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e639484a1c98402fa6d9e2bb5fe71e03
	  System UUID:                c32a4140-0000-0000-838a-ef53ae6c724a
	  Boot ID:                    65e77bd5-3b1f-49d0-a224-e0cd2d7b346a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wfcpq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kindnet-kcrzx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-2fn5w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                  node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeNotReady             8m55s                node-controller  Node ha-431000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           8m52s                node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeHasSufficientPID     8m29s (x3 over 12m)  kubelet          Node ha-431000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m29s (x3 over 12m)  kubelet          Node ha-431000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m29s (x3 over 12m)  kubelet          Node ha-431000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                8m29s (x2 over 11m)  kubelet          Node ha-431000-m04 status is now: NodeReady
	  Normal  RegisteredNode           6m27s                node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  RegisteredNode           6m10s                node-controller  Node ha-431000-m04 event: Registered Node ha-431000-m04 in Controller
	  Normal  NodeNotReady             5m47s                node-controller  Node ha-431000-m04 status is now: NodeNotReady
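
ha-431000-m04 is the one node still unhealthy: its kubelet last renewed its lease at 17:46:31, so the node-lifecycle controller flipped every condition to Unknown at 17:48:45 ("Kubelet stopped posting node status.") and applied the unreachable NoSchedule/NoExecute taints shown above. A hedged check from the host, assuming kubectl and a kubeconfig for this cluster:

	kubectl get node ha-431000-m04 -o jsonpath='{.spec.taints}'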
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.037609] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007731] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.940253] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.008173] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.748288] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.215588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.459478] systemd-fstab-generator[477]: Ignoring "noauto" option for root device
	[  +0.103771] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +1.265239] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.679773] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.260409] systemd-fstab-generator[1163]: Ignoring "noauto" option for root device
	[  +0.102915] systemd-fstab-generator[1175]: Ignoring "noauto" option for root device
	[  +0.106928] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +2.452535] systemd-fstab-generator[1404]: Ignoring "noauto" option for root device
	[  +0.107987] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.113394] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +0.130493] systemd-fstab-generator[1444]: Ignoring "noauto" option for root device
	[  +0.427524] systemd-fstab-generator[1606]: Ignoring "noauto" option for root device
	[  +6.862500] kauditd_printk_skb: 271 callbacks suppressed
	[Aug19 17:48] kauditd_printk_skb: 40 callbacks suppressed
	[ +24.233269] kauditd_printk_skb: 85 callbacks suppressed
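
The dmesg section shows only the usual hyperkit boot noise (the DSDT checksum warning, the RealTimeClock ACPI events, the missing regulatory.db) plus systemd-fstab-generator lines from early boot; nothing here points at the failure. A sketch for pulling the ring buffer directly, assuming the VM is up:

	out/minikube-darwin-amd64 ssh -p ha-431000 "sudo dmesg | tail -n 30"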
	
	
	==> etcd [1bac9a6bc683] <==
	{"level":"info","ts":"2024-08-19T17:48:00.377374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.377381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.377390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 7639] sent MsgVote request to c22c1f54a3cc7858 at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.378026Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T17:48:00.378094Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:48:00.409374Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c22c1f54a3cc7858","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T17:48:00.409450Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:48:00.432257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from c22c1f54a3cc7858 at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.432302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-08-19T17:48:00.432315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 4"}
	{"level":"info","ts":"2024-08-19T17:48:00.432322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 4"}
	{"level":"warn","ts":"2024-08-19T17:48:00.432865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.704610384s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-19T17:48:00.432910Z","caller":"traceutil/trace.go:171","msg":"trace[1009373033] range","detail":"{range_begin:; range_end:; }","duration":"4.705082403s","start":"2024-08-19T17:47:55.727822Z","end":"2024-08-19T17:48:00.432904Z","steps":["trace[1009373033] 'agreement among raft nodes before linearized reading'  (duration: 4.704609685s)"],"step_count":1}
	{"level":"error","ts":"2024-08-19T17:48:00.432938Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-19T17:48:00.443156Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-431000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T17:48:00.443469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:48:00.443876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:48:00.444023Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T17:48:00.444146Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T17:48:00.445056Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:48:00.445743Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-08-19T17:48:00.446239Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:48:00.446924Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-19T17:48:01.085875Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c22c1f54a3cc7858","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-19T17:54:32.746814Z","caller":"traceutil/trace.go:171","msg":"trace[525602257] transaction","detail":"{read_only:false; response_revision:7917; number_of_response:1; }","duration":"112.842415ms","start":"2024-08-19T17:54:32.632225Z","end":"2024-08-19T17:54:32.745067Z","steps":["trace[525602257] 'process raft request'  (duration: 73.888034ms)","trace[525602257] 'compare'  (duration: 38.497299ms)"],"step_count":2}
	
	
	==> etcd [39fe08877284] <==
	{"level":"warn","ts":"2024-08-19T17:47:05.166887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.171370368s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T17:47:05.166927Z","caller":"traceutil/trace.go:171","msg":"trace[1410457657] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; }","duration":"3.171412779s","start":"2024-08-19T17:47:01.995509Z","end":"2024-08-19T17:47:05.166922Z","steps":["trace[1410457657] 'agreement among raft nodes before linearized reading'  (duration: 3.171369875s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:05.166949Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:47:01.995503Z","time spent":"3.171439259s","remote":"127.0.0.1:54556","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true "}
	2024/08/19 17:47:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T17:47:05.171962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.726994729s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T17:47:05.172040Z","caller":"traceutil/trace.go:171","msg":"trace[1113597890] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; }","duration":"6.727085676s","start":"2024-08-19T17:46:58.444946Z","end":"2024-08-19T17:47:05.172032Z","steps":["trace[1113597890] 'agreement among raft nodes before linearized reading'  (duration: 6.726993461s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:47:05.172074Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:46:58.444911Z","time spent":"6.727153442s","remote":"127.0.0.1:54494","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":0,"response size":0,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true "}
	2024/08/19 17:47:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-08-19T17:47:05.195528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-19T17:47:05.195597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-19T17:47:05.195611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-19T17:47:05.195621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 7639] sent MsgPreVote request to c22c1f54a3cc7858 at term 3"}
	{"level":"warn","ts":"2024-08-19T17:47:05.231267Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:47:05.231399Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T17:47:05.231486Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T17:47:05.242251Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242314Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242334Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242429Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242480Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242505Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.242537Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:47:05.254609Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-19T17:47:05.254703Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-19T17:47:05.254731Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-431000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:54:33 up 7 min,  0 users,  load average: 0.46, 0.47, 0.27
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [37cd2e9ed2f3] <==
	I0819 17:46:23.914700       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:33.918534       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:33.918663       1 main.go:299] handling current node
	I0819 17:46:33.918861       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:33.918971       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:33.919255       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:33.919335       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:43.920546       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:43.920598       1 main.go:299] handling current node
	I0819 17:46:43.920613       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:43.920620       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:43.920738       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:43.920772       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:53.913617       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:46:53.913747       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:46:53.913917       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:46:53.913949       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:46:53.914169       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:46:53.914262       1 main.go:299] handling current node
	I0819 17:47:03.921210       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:47:03.921259       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:47:03.921491       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:47:03.921521       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:47:03.922162       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:47:03.922193       1 main.go:299] handling current node
	
	
	==> kindnet [51e18fb0428a] <==
	I0819 17:53:44.918438       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:53:54.907456       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:53:54.907487       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:53:54.907837       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:53:54.907917       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:53:54.907973       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:53:54.907980       1 main.go:299] handling current node
	I0819 17:54:04.907479       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:54:04.907516       1 main.go:299] handling current node
	I0819 17:54:04.907533       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:54:04.907540       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:54:04.907671       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:54:04.907682       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:54:14.913933       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:54:14.914162       1 main.go:299] handling current node
	I0819 17:54:14.914337       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:54:14.914487       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:54:14.914765       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:54:14.914907       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:54:24.907704       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:54:24.908008       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:54:24.908500       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:54:24.908694       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:54:24.909016       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:54:24.909238       1 main.go:299] handling current node
	
	
	==> kube-apiserver [4c18dbcc0004] <==
	W0819 17:47:06.224404       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224462       1 logging.go:55] [core] [Channel #21 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224512       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224540       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224567       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224707       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224877       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224939       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225060       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225185       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225305       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225473       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225603       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225483       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.223400       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.223780       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224051       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224207       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.224914       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.225824       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.241577       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.241624       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.242647       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.242737       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:47:06.242800       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a003b845ec48] <==
	I0819 17:48:01.313281       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0819 17:48:01.331515       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0819 17:48:01.328782       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0819 17:48:01.331698       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0819 17:48:01.411877       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 17:48:01.413426       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 17:48:01.413779       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:48:01.419113       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 17:48:01.429688       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 17:48:01.430281       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0819 17:48:01.430591       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 17:48:01.429696       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 17:48:01.431877       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 17:48:01.432005       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 17:48:01.432301       1 aggregator.go:171] initial CRD sync complete...
	I0819 17:48:01.432436       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 17:48:01.432480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 17:48:01.432747       1 cache.go:39] Caches are synced for autoregister controller
	I0819 17:48:01.433634       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 17:48:01.446079       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:48:01.446288       1 policy_source.go:224] refreshing policies
	I0819 17:48:01.492628       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 17:48:02.319223       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:48:25.142240       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 17:49:02.969551       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4ed272951c84] <==
	I0819 17:47:41.490925       1 serving.go:386] Generated self-signed cert in-memory
	I0819 17:47:41.916844       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 17:47:41.916877       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:47:41.919139       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:47:41.919369       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 17:47:41.919719       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 17:47:41.919893       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 17:48:01.923605       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [f4bd8ba2e043] <==
	I0819 17:48:25.174863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.786µs"
	I0819 17:48:45.310576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:48:45.327813       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:48:45.329932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.670512ms"
	I0819 17:48:45.330669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.87µs"
	I0819 17:48:48.065252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:48:50.387040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m04"
	I0819 17:49:02.979492       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qvf7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qvf7h\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 17:49:02.980150       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"acb20e0e-195e-4196-a326-6cecf7b6a85e", APIVersion:"v1", ResourceVersion:"298", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qvf7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qvf7h": the object has been modified; please apply your changes to the latest version and try again
	I0819 17:49:02.996861       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qvf7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qvf7h\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 17:49:02.997253       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"acb20e0e-195e-4196-a326-6cecf7b6a85e", APIVersion:"v1", ResourceVersion:"298", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qvf7h EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qvf7h": the object has been modified; please apply your changes to the latest version and try again
	I0819 17:49:03.001503       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="35.707158ms"
	E0819 17:49:03.002380       1 replica_set.go:560] "Unhandled Error" err="sync \"kube-system/coredns-6f6b679f8f\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-6f6b679f8f\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0819 17:49:03.004337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="138.881µs"
	I0819 17:49:03.009397       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="140.999µs"
	I0819 17:53:08.817305       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000-m02"
	I0819 17:53:17.940428       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-431000"
	I0819 17:53:48.066570       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7dff88458-wfcpq"
	I0819 17:53:48.077320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.66µs"
	I0819 17:53:48.123894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.853846ms"
	I0819 17:53:48.144694       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.74914ms"
	I0819 17:53:48.151508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.706635ms"
	I0819 17:53:48.151572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.057µs"
	I0819 17:53:48.168820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.023812ms"
	I0819 17:53:48.169343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.301µs"
	
	
	==> kube-proxy [5636b94096fe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:48:24.349165       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:48:24.367746       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0819 17:48:24.368041       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:48:24.405399       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:48:24.405456       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:48:24.405475       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:48:24.408447       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:48:24.408968       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:48:24.409000       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:48:24.413438       1 config.go:197] "Starting service config controller"
	I0819 17:48:24.414215       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:48:24.414469       1 config.go:326] "Starting node config controller"
	I0819 17:48:24.414498       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:48:24.415820       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:48:24.415879       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:48:24.514730       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:48:24.514769       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:48:24.516651       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [889ab608901b] <==
	E0819 17:44:04.860226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:11.002885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:11.002930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:23.290432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:23.290751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:23.290543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:23.291205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:26.362595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:26.363019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:41.722266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:41.722341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:41.722406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:41.722425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:44:54.009699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:44:54.009972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:45:09.369057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:45:09.369337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2442\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:45:30.873553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:45:30.873673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2649\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:45:33.945461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642": dial tcp 192.169.0.254:8443: connect: no route to host
	E0819 17:45:33.945676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&resourceVersion=2642\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [11d9cd3b2f49] <==
	E0819 17:45:08.312166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0819 17:45:09.806525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0819 17:45:10.272292       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	W0819 17:45:25.011877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:45:25.011937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:28.351281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:45:28.351338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:31.008358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 17:45:31.008417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:45:33.186287       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:45:33.186381       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 17:45:36.848394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:45:36.848442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0819 17:45:54.148342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.169.0.5:50394->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.169.0.5:50378->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148560       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.169.0.5:50362->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.169.0.5:50356->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.148871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.169.0.5:50346->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.149161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.169.0.5:50400->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.149643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.169.0.5:50358->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 17:45:54.149841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.169.0.5:50398->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	I0819 17:47:05.116640       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 17:47:05.132838       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 17:47:05.130413       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0819 17:47:05.147031       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dea4f29e7860] <==
	I0819 17:47:41.723714       1 serving.go:386] Generated self-signed cert in-memory
	W0819 17:47:52.174871       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0819 17:47:52.174919       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 17:47:52.174925       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 17:48:01.357387       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 17:48:01.359330       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:48:01.366155       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 17:48:01.366276       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 17:48:01.366447       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 17:48:01.366799       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 17:48:01.470208       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 17:50:33 ha-431000 kubelet[1613]: E0819 17:50:33.663241    1613 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:50:33 ha-431000 kubelet[1613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:50:33 ha-431000 kubelet[1613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:50:33 ha-431000 kubelet[1613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:50:33 ha-431000 kubelet[1613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:51:33 ha-431000 kubelet[1613]: E0819 17:51:33.662663    1613 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:51:33 ha-431000 kubelet[1613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:51:33 ha-431000 kubelet[1613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:51:33 ha-431000 kubelet[1613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:51:33 ha-431000 kubelet[1613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:52:33 ha-431000 kubelet[1613]: E0819 17:52:33.672157    1613 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:52:33 ha-431000 kubelet[1613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:52:33 ha-431000 kubelet[1613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:52:33 ha-431000 kubelet[1613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:52:33 ha-431000 kubelet[1613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:53:33 ha-431000 kubelet[1613]: E0819 17:53:33.663790    1613 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:53:33 ha-431000 kubelet[1613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:53:33 ha-431000 kubelet[1613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:53:33 ha-431000 kubelet[1613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:53:33 ha-431000 kubelet[1613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:54:33 ha-431000 kubelet[1613]: E0819 17:54:33.663921    1613 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:54:33 ha-431000 kubelet[1613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:54:33 ha-431000 kubelet[1613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:54:33 ha-431000 kubelet[1613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:54:33 ha-431000 kubelet[1613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-r8bld
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-431000 describe pod busybox-7dff88458-r8bld
helpers_test.go:282: (dbg) kubectl --context ha-431000 describe pod busybox-7dff88458-r8bld:

-- stdout --
	Name:             busybox-7dff88458-r8bld
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hhnxr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hhnxr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  47s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  46s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (101.49s)
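For local triage, the post-mortem queries above can be replayed verbatim; a minimal sketch, assuming the ha-431000 kubeconfig context from this run is still available (the pod name is specific to this run):

	# Replay the harness's post-mortem checks (helpers_test.go:261 and :277):
	# list pods that are not Running, then describe the pod it flagged.
	kubectl --context ha-431000 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector='status.phase!=Running'
	kubectl --context ha-431000 describe pod busybox-7dff88458-r8bld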

TestMultiControlPlane/serial/StopCluster (94.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 stop -v=7 --alsologtostderr
E0819 10:55:29.097540    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:55:43.504749    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 stop -v=7 --alsologtostderr: (1m33.893994493s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 7 (104.871164ms)

-- stdout --
	ha-431000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-431000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-431000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-431000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 10:56:09.212516    6972 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:56:09.213307    6972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:56:09.213315    6972 out.go:358] Setting ErrFile to fd 2...
	I0819 10:56:09.213322    6972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:56:09.213820    6972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:56:09.214037    6972 out.go:352] Setting JSON to false
	I0819 10:56:09.214063    6972 mustload.go:65] Loading cluster: ha-431000
	I0819 10:56:09.214090    6972 notify.go:220] Checking for updates...
	I0819 10:56:09.214354    6972 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:09.214369    6972 status.go:255] checking status of ha-431000 ...
	I0819 10:56:09.214697    6972 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.214742    6972 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.223890    6972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52237
	I0819 10:56:09.224259    6972 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.224668    6972 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.224689    6972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.224902    6972 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.225046    6972 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:56:09.225140    6972 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.225210    6972 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:56:09.226133    6972 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 6743 missing from process table
	I0819 10:56:09.226157    6972 status.go:330] ha-431000 host status = "Stopped" (err=<nil>)
	I0819 10:56:09.226168    6972 status.go:343] host is not running, skipping remaining checks
	I0819 10:56:09.226175    6972 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:56:09.226195    6972 status.go:255] checking status of ha-431000-m02 ...
	I0819 10:56:09.226437    6972 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.226457    6972 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.234788    6972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52239
	I0819 10:56:09.235099    6972 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.235440    6972 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.235469    6972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.235701    6972 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.235811    6972 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:56:09.235895    6972 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.235964    6972 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6783
	I0819 10:56:09.236885    6972 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6783 missing from process table
	I0819 10:56:09.236920    6972 status.go:330] ha-431000-m02 host status = "Stopped" (err=<nil>)
	I0819 10:56:09.236927    6972 status.go:343] host is not running, skipping remaining checks
	I0819 10:56:09.236934    6972 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:56:09.236945    6972 status.go:255] checking status of ha-431000-m03 ...
	I0819 10:56:09.237187    6972 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.237213    6972 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.247759    6972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52241
	I0819 10:56:09.248096    6972 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.248399    6972 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.248407    6972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.248613    6972 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.248722    6972 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0819 10:56:09.248815    6972 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.248894    6972 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 6801
	I0819 10:56:09.249820    6972 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid 6801 missing from process table
	I0819 10:56:09.249847    6972 status.go:330] ha-431000-m03 host status = "Stopped" (err=<nil>)
	I0819 10:56:09.249854    6972 status.go:343] host is not running, skipping remaining checks
	I0819 10:56:09.249861    6972 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 10:56:09.249871    6972 status.go:255] checking status of ha-431000-m04 ...
	I0819 10:56:09.250118    6972 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.250144    6972 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.258661    6972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52243
	I0819 10:56:09.259002    6972 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.259354    6972 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.259372    6972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.259574    6972 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.259695    6972 main.go:141] libmachine: (ha-431000-m04) Calling .GetState
	I0819 10:56:09.259780    6972 main.go:141] libmachine: (ha-431000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.259872    6972 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid from json: 6186
	I0819 10:56:09.260786    6972 main.go:141] libmachine: (ha-431000-m04) DBG | hyperkit pid 6186 missing from process table
	I0819 10:56:09.260796    6972 status.go:330] ha-431000-m04 host status = "Stopped" (err=<nil>)
	I0819 10:56:09.260803    6972 status.go:343] host is not running, skipping remaining checks
	I0819 10:56:09.260810    6972 status.go:257] ha-431000-m04 status: &{Name:ha-431000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
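Each "Stopped" verdict in the stderr trace above reduces to a pid-liveness probe: status.go reads the hyperkit pid recorded in the machine's json config and checks whether that pid still exists in the process table. A minimal Go sketch of that idiom, with the pid value taken from the log and the check simplified relative to the real driver:

    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    )

    // pidAlive reports whether a process with the given pid exists, using the
    // classic "signal 0" probe: it performs the existence and permission
    // checks of kill(2) without actually delivering a signal.
    func pidAlive(pid int) bool {
    	proc, err := os.FindProcess(pid) // never fails on Unix
    	if err != nil {
    		return false
    	}
    	return proc.Signal(syscall.Signal(0)) == nil
    }

    func main() {
    	pid := 6743 // e.g. the hyperkit pid recorded in the machine's json config
    	if !pidAlive(pid) {
    		fmt.Printf("hyperkit pid %d missing from process table\n", pid)
    	}
    }

A stale pid like this is also why the restart path further down removes the leftover hyperkit.pid file before launching a fresh VM.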
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr": ha-431000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr": ha-431000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr": ha-431000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-431000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000: exit status 7 (68.439207ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (94.07s)
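The three assertions above (ha_test.go:543, 549, and 552) all grade the plain-text output of "minikube status". A rough Go sketch of that style of check, counting per-field occurrences in the dump; the expected counts here are illustrative, not the exact values the real test asserts:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Abbreviated status output in the same shape as the dump above.
    	out := `ha-431000
    type: Control Plane
    host: Stopped
    kubelet: Stopped
    apiserver: Stopped
    kubeconfig: Stopped

    ha-431000-m04
    type: Worker
    host: Stopped
    kubelet: Stopped
    `
    	controlPlanes := strings.Count(out, "type: Control Plane")
    	stoppedKubelets := strings.Count(out, "kubelet: Stopped")
    	stoppedAPIServers := strings.Count(out, "apiserver: Stopped")
    	fmt.Println(controlPlanes, stoppedKubelets, stoppedAPIServers) // 1 2 1
    	if controlPlanes < 2 { // illustrative expectation for an HA cluster
    		fmt.Println("status says not two control-plane nodes are present")
    	}
    }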

TestMultiControlPlane/serial/RestartCluster (62.82s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-431000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-431000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : signal: killed (1m0.145622901s)
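The "signal: killed" after almost exactly one minute is the classic signature of a Go exec.CommandContext whose context deadline expired: the runtime kills the child with SIGKILL and Run reports the signal. A minimal reproduction of that failure mode, with a sleep standing in for the long-running "minikube start":

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()
    	// The child outlives the deadline, so it is killed and Run reports it.
    	err := exec.CommandContext(ctx, "sleep", "120").Run()
    	fmt.Println(err) // signal: killed
    }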

-- stdout --
	* [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	* Restarting existing hyperkit VM for "ha-431000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	* Enabled addons: 
	
	* Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	* Restarting existing hyperkit VM for "ha-431000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5

-- /stdout --
** stderr ** 
	I0819 10:56:09.387037    6981 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:56:09.387223    6981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:56:09.387229    6981 out.go:358] Setting ErrFile to fd 2...
	I0819 10:56:09.387232    6981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:56:09.387409    6981 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:56:09.388880    6981 out.go:352] Setting JSON to false
	I0819 10:56:09.411239    6981 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5139,"bootTime":1724085030,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:56:09.411338    6981 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:56:09.433409    6981 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:56:09.476100    6981 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:56:09.476156    6981 notify.go:220] Checking for updates...
	I0819 10:56:09.518722    6981 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:56:09.539864    6981 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:56:09.561099    6981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:56:09.582061    6981 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:56:09.603005    6981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:56:09.624771    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:09.625423    6981 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.625516    6981 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.635388    6981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52249
	I0819 10:56:09.635748    6981 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.636177    6981 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.636189    6981 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.636396    6981 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.636519    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:09.636718    6981 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:56:09.636945    6981 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.636967    6981 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.645612    6981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52251
	I0819 10:56:09.645982    6981 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.646319    6981 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.646343    6981 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.646562    6981 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.646665    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:09.675726    6981 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:56:09.717938    6981 start.go:297] selected driver: hyperkit
	I0819 10:56:09.717966    6981 start.go:901] validating driver "hyperkit" against &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:56:09.718211    6981 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:56:09.718380    6981 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:56:09.718594    6981 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:56:09.728278    6981 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:56:09.732198    6981 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.732218    6981 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:56:09.734893    6981 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:56:09.734966    6981 cni.go:84] Creating CNI manager for ""
	I0819 10:56:09.734976    6981 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:56:09.735058    6981 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:56:09.735153    6981 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:56:09.756949    6981 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:56:09.778034    6981 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:56:09.778106    6981 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:56:09.778135    6981 cache.go:56] Caching tarball of preloaded images
	I0819 10:56:09.778324    6981 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:56:09.778344    6981 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:56:09.778524    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:09.779449    6981 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:56:09.779569    6981 start.go:364] duration metric: took 95.514µs to acquireMachinesLock for "ha-431000"
	I0819 10:56:09.779608    6981 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:56:09.779625    6981 fix.go:54] fixHost starting: 
	I0819 10:56:09.780035    6981 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.780080    6981 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.789228    6981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52253
	I0819 10:56:09.789570    6981 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.789942    6981 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.789956    6981 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.790188    6981 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.790310    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:09.790421    6981 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:56:09.790499    6981 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.790583    6981 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:56:09.791522    6981 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 6743 missing from process table
	I0819 10:56:09.791559    6981 fix.go:112] recreateIfNeeded on ha-431000: state=Stopped err=<nil>
	I0819 10:56:09.791574    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	W0819 10:56:09.791672    6981 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:56:09.833730    6981 out.go:177] * Restarting existing hyperkit VM for "ha-431000" ...
	I0819 10:56:09.854892    6981 main.go:141] libmachine: (ha-431000) Calling .Start
	I0819 10:56:09.855160    6981 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.855203    6981 main.go:141] libmachine: (ha-431000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:56:09.857060    6981 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 6743 missing from process table
	I0819 10:56:09.857081    6981 main.go:141] libmachine: (ha-431000) DBG | pid 6743 is in state "Stopped"
	I0819 10:56:09.857096    6981 main.go:141] libmachine: (ha-431000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid...
	I0819 10:56:09.857535    6981 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:56:09.970561    6981 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:56:09.970590    6981 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:56:09.970672    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8c00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:56:09.970699    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8c00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:56:09.970748    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:56:09.970788    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:56:09.970807    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:56:09.972280    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: Pid is 6995
	I0819 10:56:09.972670    6981 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:56:09.972685    6981 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.972868    6981 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6995
	I0819 10:56:09.974774    6981 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:56:09.974861    6981 main.go:141] libmachine: (ha-431000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:56:09.974891    6981 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 10:56:09.974908    6981 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d6bf}
	I0819 10:56:09.974929    6981 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d6ab}
	I0819 10:56:09.974944    6981 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:56:09.974967    6981 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
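The MAC-to-IP resolution above scans macOS's /var/db/dhcpd_leases, maintained by the vmnet DHCP server, for the entry whose hardware address matches the MAC hyperkit generated. A sketch of that lookup; the lease-file field names (ip_address=, hw_address=) follow the usual macOS layout and are an assumption here:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // lookupIP scans the leases file and returns the ip_address seen in the
    // same entry as a hw_address line ending with the given MAC.
    func lookupIP(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=")
    		}
    		// e.g. "hw_address=1,b2:ad:7c:2f:19:d9"
    		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("%s not found in %s", mac, path)
    }

    func main() {
    	fmt.Println(lookupIP("/var/db/dhcpd_leases", "b2:ad:7c:2f:19:d9"))
    }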
	I0819 10:56:09.975005    6981 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:56:09.975805    6981 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:56:09.975993    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:09.976453    6981 machine.go:93] provisionDockerMachine start ...
	I0819 10:56:09.976463    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:09.976570    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:09.976688    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:09.976807    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:09.976913    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:09.977033    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:09.977172    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:09.977450    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:09.977460    6981 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:56:09.980166    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:56:10.032027    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:56:10.032759    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:56:10.032774    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:56:10.032792    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:56:10.032806    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:56:10.411967    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:56:10.411990    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:56:10.526438    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:56:10.526455    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:56:10.526465    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:56:10.526476    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:56:10.527428    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:56:10.527460    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:56:16.111682    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:56:16.111715    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:56:16.111723    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:56:16.136032    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:56:20.059539    6981 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
	I0819 10:56:23.124072    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:56:23.124086    6981 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:56:23.124300    6981 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:56:23.124312    6981 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:56:23.124408    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.124489    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.124602    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.124703    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.124799    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.124929    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.125177    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.125191    6981 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:56:23.193884    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:56:23.193904    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.194038    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.194146    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.194270    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.194375    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.194519    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.194668    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.194679    6981 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:56:23.260785    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:56:23.260805    6981 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:56:23.260822    6981 buildroot.go:174] setting up certificates
	I0819 10:56:23.260827    6981 provision.go:84] configureAuth start
	I0819 10:56:23.260833    6981 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:56:23.260971    6981 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:56:23.261088    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.261187    6981 provision.go:143] copyHostCerts
	I0819 10:56:23.261218    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:56:23.261288    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:56:23.261297    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:56:23.261682    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:56:23.261905    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:56:23.261947    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:56:23.261952    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:56:23.262034    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:56:23.262219    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:56:23.262264    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:56:23.262269    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:56:23.262412    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:56:23.262580    6981 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:56:23.359637    6981 provision.go:177] copyRemoteCerts
	I0819 10:56:23.359688    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:56:23.359702    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.359820    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.359935    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.360020    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.360110    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:23.397504    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:56:23.397593    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:56:23.416728    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:56:23.416796    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:56:23.435752    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:56:23.435811    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:56:23.455175    6981 provision.go:87] duration metric: took 194.331491ms to configureAuth
	I0819 10:56:23.455187    6981 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:56:23.455360    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:23.455376    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:23.455501    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.455584    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.455667    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.455746    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.455831    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.455934    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.456063    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.456071    6981 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:56:23.514630    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:56:23.514642    6981 buildroot.go:70] root file system type: tmpfs
	I0819 10:56:23.514729    6981 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:56:23.514740    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.514876    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.514985    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.515095    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.515177    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.515317    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.515460    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.515505    6981 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:56:23.584286    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:56:23.584316    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.584457    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.584543    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.584638    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.584728    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.584864    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.585007    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.585021    6981 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:56:25.275768    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:56:25.275783    6981 machine.go:96] duration metric: took 15.299049026s to provisionDockerMachine
	I0819 10:56:25.275795    6981 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:56:25.275802    6981 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:56:25.275811    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.275997    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:56:25.276027    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.276128    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.276239    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.276337    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.276414    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:25.321744    6981 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:56:25.325075    6981 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:56:25.325087    6981 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:56:25.325190    6981 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:56:25.325376    6981 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:56:25.325383    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:56:25.325584    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:56:25.333943    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:56:25.362067    6981 start.go:296] duration metric: took 86.262531ms for postStartSetup
	I0819 10:56:25.362093    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.362269    6981 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:56:25.362289    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.362385    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.362481    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.362573    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.362661    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:25.400061    6981 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:56:25.400122    6981 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:56:25.453367    6981 fix.go:56] duration metric: took 15.67346414s for fixHost
	I0819 10:56:25.453389    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.453522    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.453620    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.453724    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.453811    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.453937    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:25.454090    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:25.454097    6981 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:56:25.512221    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090185.541072493
	
	I0819 10:56:25.512233    6981 fix.go:216] guest clock: 1724090185.541072493
	I0819 10:56:25.512238    6981 fix.go:229] Guest: 2024-08-19 10:56:25.541072493 -0700 PDT Remote: 2024-08-19 10:56:25.453379 -0700 PDT m=+16.103011649 (delta=87.693493ms)
	I0819 10:56:25.512259    6981 fix.go:200] guest clock delta is within tolerance: 87.693493ms
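The guest-clock check above runs date +%s.%N inside the VM over SSH and compares it with the host clock, accepting the machine when the skew is small. A minimal sketch of the delta computation on the values from the log; the one-second tolerance is an assumption for illustration:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	guestOut := "1724090185.541072493" // `date +%s.%N` output from the guest
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))

    	// Host clock captured at the same moment (from the log line above).
    	remote := time.Unix(1724090185, 453379000)

    	delta := guest.Sub(remote)
    	fmt.Printf("guest clock delta: %v\n", delta) // ~87.69ms

    	if math.Abs(delta.Seconds()) < 1.0 { // illustrative tolerance
    		fmt.Println("guest clock delta is within tolerance")
    	}
    }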
	I0819 10:56:25.512269    6981 start.go:83] releasing machines lock for "ha-431000", held for 15.732401062s
	I0819 10:56:25.512292    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.512419    6981 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:56:25.512514    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.512822    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.512930    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.513011    6981 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:56:25.513042    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.513060    6981 ssh_runner.go:195] Run: cat /version.json
	I0819 10:56:25.513070    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.513140    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.513154    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.513205    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.513231    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.513284    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.513318    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.513357    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:25.513383    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:25.592288    6981 ssh_runner.go:195] Run: systemctl --version
	I0819 10:56:25.597153    6981 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:56:25.601380    6981 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:56:25.601424    6981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:56:25.614660    6981 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:56:25.614671    6981 start.go:495] detecting cgroup driver to use...
	I0819 10:56:25.614767    6981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:56:25.631529    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:56:25.640397    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:56:25.649192    6981 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:56:25.649232    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:56:25.658096    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:56:25.666956    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:56:25.675821    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:56:25.684510    6981 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:56:25.693585    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:56:25.702323    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:56:25.715509    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:56:25.724687    6981 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:56:25.731994    6981 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:56:25.739249    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:25.828532    6981 ssh_runner.go:195] Run: sudo systemctl restart containerd
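The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.10, force SystemdCgroup = false to match the cgroupfs driver, migrate legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, before restarting containerd. A small stand-in for the SystemdCgroup edit, written as a local Go sketch rather than minikube's remote sed invocation:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        // Same substitution the logged sed performs: keep the indentation,
        // replace the value with false.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }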
	I0819 10:56:25.848493    6981 start.go:495] detecting cgroup driver to use...
	I0819 10:56:25.848569    6981 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:56:25.863350    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:56:25.879011    6981 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:56:25.896262    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:56:25.907139    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:56:25.917546    6981 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:56:25.939914    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:56:25.950034    6981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:56:25.964691    6981 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:56:25.967669    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:56:25.974806    6981 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:56:25.988317    6981 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:56:26.081595    6981 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:56:26.191696    6981 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:56:26.191769    6981 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
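docker.go:574 pushes a small daemon.json (130 bytes here) to switch dockerd to the cgroupfs cgroup driver before the restart below. The log records only the file's size, not its contents; the shape below is a hypothetical, purely illustrative reconstruction:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Hypothetical contents; the real payload is only identified by size in the log.
        cfg := map[string]interface{}{
            "exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
            "log-driver": "json-file",
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }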
	I0819 10:56:26.205687    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:26.297875    6981 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:56:28.657143    6981 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.359206981s)
	I0819 10:56:28.657214    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:56:28.667753    6981 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:56:28.680506    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:56:28.690501    6981 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:56:28.783300    6981 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:56:28.887365    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:28.995138    6981 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:56:29.013380    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:56:29.023676    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:29.117464    6981 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:56:29.179606    6981 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:56:29.179685    6981 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:56:29.184114    6981 start.go:563] Will wait 60s for crictl version
	I0819 10:56:29.184165    6981 ssh_runner.go:195] Run: which crictl
	I0819 10:56:29.187049    6981 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:56:29.212932    6981 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:56:29.213012    6981 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:56:29.229631    6981 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:56:29.272737    6981 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:56:29.272789    6981 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:56:29.273156    6981 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:56:29.277848    6981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
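The /etc/hosts one-liner above is idempotent: grep -v strips any stale host.minikube.internal entry, the fresh mapping is appended, and the result is copied back over /etc/hosts. The same logic as a self-contained Go sketch (an illustration, not minikube's implementation):

    package main

    import (
        "fmt"
        "strings"
    )

    // updateHosts drops any line already mapping name and appends ip<TAB>name,
    // mirroring the { grep -v ...; echo ...; } pipeline from the log.
    func updateHosts(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        return strings.Join(append(kept, ip+"\t"+name), "\n")
    }

    func main() {
        fmt.Println(updateHosts("127.0.0.1\tlocalhost", "192.169.0.1", "host.minikube.internal"))
    }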
	I0819 10:56:29.287607    6981 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:56:29.287697    6981 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:56:29.287753    6981 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:56:29.301005    6981 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:56:29.301016    6981 docker.go:615] Images already preloaded, skipping extraction
	I0819 10:56:29.301094    6981 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:56:29.314630    6981 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:56:29.314644    6981 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:56:29.314653    6981 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:56:29.314737    6981 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
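The empty ExecStart= line in the drop-in above is deliberate systemd syntax: the first, valueless directive clears the ExecStart inherited from the base kubelet.service unit, and the second replaces it with the node-specific command (hostname override, node IP, kubeconfig paths). Without the reset, systemd would reject a second ExecStart= for a non-oneshot service.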
	I0819 10:56:29.314807    6981 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:56:29.352431    6981 cni.go:84] Creating CNI manager for ""
	I0819 10:56:29.352444    6981 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:56:29.352456    6981 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:56:29.352472    6981 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:56:29.352556    6981 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
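The generated kubeadm config above is a single multi-document YAML stream: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration, and a KubeProxyConfiguration, separated by --- markers. To split and inspect such a stream yourself, a minimal sketch using gopkg.in/yaml.v3 (an assumed helper, not something the test uses):

    package main

    import (
        "fmt"
        "io"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        const cfg = "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n"
        dec := yaml.NewDecoder(strings.NewReader(cfg))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document reports its own apiVersion/kind pair.
            fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
        }
    }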
	
	I0819 10:56:29.352570    6981 kube-vip.go:115] generating kube-vip config ...
	I0819 10:56:29.352619    6981 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:56:29.364946    6981 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:56:29.365018    6981 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
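This generated static-pod manifest is what implements the HA virtual IP: kube-vip runs on each control-plane node with NET_ADMIN/NET_RAW, holds leader election on the plndr-cp-lock lease, and the leader advertises 192.169.0.254 (the APIServerHAVIP) via ARP while load-balancing port 8443 (lb_enable/lb_port). A small sketch that pulls the VIP back out of such a manifest with gopkg.in/yaml.v3 (again an assumed dependency):

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    type pod struct {
        Spec struct {
            Containers []struct {
                Env []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        // Trimmed-down stand-in for the manifest logged above.
        manifest := "spec:\n  containers:\n  - env:\n    - name: address\n      value: 192.169.0.254\n"
        var p pod
        if err := yaml.Unmarshal([]byte(manifest), &p); err != nil {
            panic(err)
        }
        for _, c := range p.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" {
                    fmt.Println("kube-vip VIP:", e.Value)
                }
            }
        }
    }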
	I0819 10:56:29.365072    6981 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:56:29.372661    6981 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:56:29.372708    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:56:29.380027    6981 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:56:29.393672    6981 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:56:29.406853    6981 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:56:29.420484    6981 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:56:29.433844    6981 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:56:29.436764    6981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:56:29.445878    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:29.540868    6981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:56:29.555532    6981 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:56:29.555544    6981 certs.go:194] generating shared ca certs ...
	I0819 10:56:29.555554    6981 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:56:29.555749    6981 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:56:29.555835    6981 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:56:29.555845    6981 certs.go:256] generating profile certs ...
	I0819 10:56:29.555952    6981 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:56:29.556031    6981 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59
	I0819 10:56:29.556114    6981 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:56:29.556123    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:56:29.556144    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:56:29.556161    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:56:29.556184    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:56:29.556206    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:56:29.556235    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:56:29.556265    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:56:29.556283    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:56:29.556384    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:56:29.556431    6981 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:56:29.556440    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:56:29.556474    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:56:29.556508    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:56:29.556540    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:56:29.556611    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:56:29.556646    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:56:29.556667    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:56:29.556692    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:56:29.557189    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:56:29.599246    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:56:29.617881    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:56:29.636687    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:56:29.659252    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 10:56:29.692653    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:56:29.731841    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:56:29.799906    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:56:29.845242    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:56:29.877042    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:56:29.905021    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:56:29.944897    6981 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:56:29.979360    6981 ssh_runner.go:195] Run: openssl version
	I0819 10:56:29.985756    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:56:29.998027    6981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:56:30.002417    6981 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:56:30.002461    6981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:56:30.007997    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:56:30.022681    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:56:30.037160    6981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:56:30.042096    6981 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:56:30.042154    6981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:56:30.048983    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 10:56:30.060437    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:56:30.069476    6981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:56:30.072891    6981 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:56:30.072925    6981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:56:30.077193    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
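The openssl x509 -hash / ln -fs pairs above install each CA under /etc/ssl/certs by its subject hash (b5213941.0 for minikubeCA, for example), which is how OpenSSL-based clients locate trust anchors on disk. A Go client can skip the hash directory and load the PEM directly; a minimal sketch using the path from the log:

    package main

    import (
        "crypto/x509"
        "fmt"
        "os"
    )

    func main() {
        pemData, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Println("read:", err) // expected when run off the test VM
            return
        }
        pool := x509.NewCertPool()
        if pool.AppendCertsFromPEM(pemData) {
            fmt.Println("minikubeCA added to trust pool")
        }
    }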
	I0819 10:56:30.086257    6981 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:56:30.089634    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 10:56:30.093907    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 10:56:30.098134    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 10:56:30.102491    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 10:56:30.106994    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 10:56:30.111242    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
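The -checkend 86400 runs verify that none of the control-plane certificates expire within 24 hours before they are reused. An equivalent check in Go (an illustration, not minikube's code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside the next d, matching openssl x509 -checkend semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        fmt.Println(expiring, err)
    }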
	I0819 10:56:30.115484    6981 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:56:30.115606    6981 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:56:30.135150    6981 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:56:30.143921    6981 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 10:56:30.143931    6981 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 10:56:30.143976    6981 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 10:56:30.152249    6981 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:56:30.152544    6981 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-431000" does not appear in /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:56:30.152629    6981 kubeconfig.go:62] /Users/jenkins/minikube-integration/19478-1622/kubeconfig needs updating (will repair): [kubeconfig missing "ha-431000" cluster setting kubeconfig missing "ha-431000" context setting]
	I0819 10:56:30.152837    6981 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:56:30.153454    6981 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:56:30.153654    6981 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdeec2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:56:30.153974    6981 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:56:30.154142    6981 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 10:56:30.162096    6981 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0819 10:56:30.162107    6981 kubeadm.go:597] duration metric: took 18.172014ms to restartPrimaryControlPlane
	I0819 10:56:30.162112    6981 kubeadm.go:394] duration metric: took 46.636783ms to StartCluster
	I0819 10:56:30.162124    6981 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:56:30.162205    6981 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:56:30.162583    6981 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:56:30.162809    6981 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:56:30.162822    6981 start.go:241] waiting for startup goroutines ...
	I0819 10:56:30.162833    6981 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:56:30.162953    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:30.207316    6981 out.go:177] * Enabled addons: 
	I0819 10:56:30.229323    6981 addons.go:510] duration metric: took 66.491913ms for enable addons: enabled=[]
	I0819 10:56:30.229376    6981 start.go:246] waiting for cluster config update ...
	I0819 10:56:30.229387    6981 start.go:255] writing updated cluster config ...
	I0819 10:56:30.251212    6981 out.go:201] 
	I0819 10:56:30.272839    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:30.272969    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:30.295470    6981 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:56:30.336958    6981 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:56:30.336992    6981 cache.go:56] Caching tarball of preloaded images
	I0819 10:56:30.337177    6981 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:56:30.337195    6981 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:56:30.337332    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:30.338308    6981 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:56:30.338435    6981 start.go:364] duration metric: took 98.75µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:56:30.338470    6981 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:56:30.338478    6981 fix.go:54] fixHost starting: m02
	I0819 10:56:30.338906    6981 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:30.338952    6981 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:30.348209    6981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52276
	I0819 10:56:30.348566    6981 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:30.348941    6981 main.go:141] libmachine: Using API Version  1
	I0819 10:56:30.348955    6981 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:30.349205    6981 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:30.349316    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:30.349413    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:56:30.349494    6981 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:30.349575    6981 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6783
	I0819 10:56:30.350514    6981 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6783 missing from process table
	I0819 10:56:30.350551    6981 fix.go:112] recreateIfNeeded on ha-431000-m02: state=Stopped err=<nil>
	I0819 10:56:30.350562    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	W0819 10:56:30.350646    6981 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:56:30.372317    6981 out.go:177] * Restarting existing hyperkit VM for "ha-431000-m02" ...
	I0819 10:56:30.414203    6981 main.go:141] libmachine: (ha-431000-m02) Calling .Start
	I0819 10:56:30.414469    6981 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:30.414521    6981 main.go:141] libmachine: (ha-431000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:56:30.416354    6981 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6783 missing from process table
	I0819 10:56:30.416368    6981 main.go:141] libmachine: (ha-431000-m02) DBG | pid 6783 is in state "Stopped"
	I0819 10:56:30.416390    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid...
	I0819 10:56:30.416765    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:56:30.443708    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:56:30.443734    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:56:30.443894    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:56:30.443925    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:56:30.443967    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:56:30.444021    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:56:30.444046    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:56:30.445458    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: Pid is 7000
	I0819 10:56:30.445867    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:56:30.445892    6981 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:30.445941    6981 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 7000
	I0819 10:56:30.447945    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:56:30.448023    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:56:30.448039    6981 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 10:56:30.448056    6981 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 10:56:30.448068    6981 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d6bf}
	I0819 10:56:30.448081    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:56:30.448095    6981 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
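Because hyperkit exposes no IP-reporting API, the driver recovers the restarted VM's address by matching its generated MAC (5a:74:68:47:b9:72) against macOS's DHCP lease database, as the search above shows. A simplified parser, assuming the entry layout of /var/db/dhcpd_leases implied by the log (the real file format may differ in detail):

    package main

    import (
        "fmt"
        "strings"
    )

    // findIP scans dhcpd_leases-style text for the entry whose hw_address
    // matches mac and returns the ip_address recorded alongside it.
    func findIP(leases, mac string) (string, bool) {
        var ip string
        for _, raw := range strings.Split(leases, "\n") {
            line := strings.TrimSpace(raw)
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case line == "hw_address=1,"+mac:
                return ip, true
            case line == "}":
                ip = "" // entry closed without a match; reset
            }
        }
        return "", false
    }

    func main() {
        sample := "{\n\tname=minikube\n\tip_address=192.169.0.6\n\thw_address=1,5a:74:68:47:b9:72\n}"
        fmt.Println(findIP(sample, "5a:74:68:47:b9:72"))
    }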
	I0819 10:56:30.448141    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:56:30.448849    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:56:30.449056    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:30.449547    6981 machine.go:93] provisionDockerMachine start ...
	I0819 10:56:30.449557    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:30.449675    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:30.449784    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:30.449881    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:30.449987    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:30.450088    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:30.450195    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:30.450353    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:30.450361    6981 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:56:30.453488    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:56:30.462353    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:56:30.463409    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:56:30.463422    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:56:30.463433    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:56:30.463443    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:56:30.845998    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:56:30.846010    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:56:30.960635    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:56:30.960655    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:56:30.960662    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:56:30.960688    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:56:30.961476    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:56:30.961486    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:56:36.544155    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:56:36.544211    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:56:36.544223    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:56:36.568477    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:56:41.505008    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:56:41.505022    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:56:41.505146    6981 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:56:41.505155    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:56:41.505234    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.505320    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.505407    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.505489    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.505567    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.505722    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:41.505871    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:41.505879    6981 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:56:41.565288    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:56:41.565303    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.565441    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.565542    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.565626    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.565709    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.565844    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:41.566011    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:41.566024    6981 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:56:41.623307    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:56:41.623322    6981 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:56:41.623330    6981 buildroot.go:174] setting up certificates
	I0819 10:56:41.623337    6981 provision.go:84] configureAuth start
	I0819 10:56:41.623343    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:56:41.623485    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:56:41.623593    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.623676    6981 provision.go:143] copyHostCerts
	I0819 10:56:41.623706    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:56:41.623762    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:56:41.623769    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:56:41.624200    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:56:41.624417    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:56:41.624448    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:56:41.624453    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:56:41.624522    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:56:41.624676    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:56:41.624707    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:56:41.624712    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:56:41.624782    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:56:41.624934    6981 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
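
provision.go:117 above mints a per-node server certificate whose SANs cover every name the node answers to (127.0.0.1, the node IP 192.169.0.6, the hostname, plus the localhost/minikube aliases), signed by the profile CA. A self-contained Go sketch of a certificate with that SAN shape, using crypto/x509 (illustrative only: key sizes, lifetimes, and subject fields here are assumptions, not minikube's exact values):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key pair (in the log this role is played by ca.pem / ca-key.pem
        // from .minikube/certs).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert: the SAN list mirrors the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m02"}},
            DNSNames:     []string{"ha-431000-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
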
	I0819 10:56:41.834784    6981 provision.go:177] copyRemoteCerts
	I0819 10:56:41.834846    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:56:41.834860    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.835000    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.835091    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.835186    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.835288    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:56:41.866060    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:56:41.866147    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:56:41.885413    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:56:41.885478    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:56:41.904963    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:56:41.905035    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:56:41.924331    6981 provision.go:87] duration metric: took 300.981908ms to configureAuth
	I0819 10:56:41.924343    6981 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:56:41.924516    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:41.924544    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:41.924686    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.924771    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.924843    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.924919    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.925004    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.925116    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:41.925233    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:41.925240    6981 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:56:41.974425    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:56:41.974437    6981 buildroot.go:70] root file system type: tmpfs
	I0819 10:56:41.974511    6981 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:56:41.974522    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.974649    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.974738    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.974832    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.974919    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.975042    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:41.975185    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:41.975231    6981 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:56:42.033848    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:56:42.033865    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:42.033997    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:42.034082    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:42.034173    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:42.034263    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:42.034398    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:42.034538    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:42.034551    6981 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:56:43.712997    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:56:43.713011    6981 machine.go:96] duration metric: took 13.263218761s to provisionDockerMachine
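
The `sudo diff -u ... || { sudo mv ...; daemon-reload; enable; restart; }` one-liner a few lines up is a write-if-changed guard: docker is only re-enabled and restarted when the freshly rendered unit differs from what is installed (here the target did not exist yet, hence diff's "can't stat" message followed by the enable symlink). The same idiom in Go, as a rough standalone sketch:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // installIfChanged writes desired to path and reports whether a restart is
    // needed, skipping the write when the file already has the desired content.
    func installIfChanged(path string, desired []byte) (bool, error) {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, desired) {
            return false, nil // unit unchanged; no daemon-reload/restart needed
        }
        if err := os.WriteFile(path, desired, 0644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        changed, err := installIfChanged("docker.service.test", []byte("[Unit]\n"))
        fmt.Println("restart needed:", changed, "err:", err)
    }
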
	I0819 10:56:43.713019    6981 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:56:43.713026    6981 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:56:43.713035    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.713216    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:56:43.713228    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:43.713316    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.713406    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.713493    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.713587    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:56:43.752505    6981 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:56:43.755744    6981 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:56:43.755757    6981 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:56:43.755860    6981 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:56:43.756028    6981 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:56:43.756035    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:56:43.756193    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:56:43.765051    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:56:43.793166    6981 start.go:296] duration metric: took 80.136725ms for postStartSetup
	I0819 10:56:43.793188    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.793370    6981 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:56:43.793383    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:43.793484    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.793569    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.793660    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.793746    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:56:43.825409    6981 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:56:43.825478    6981 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:56:43.879245    6981 fix.go:56] duration metric: took 13.540521433s for fixHost
	I0819 10:56:43.879270    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:43.879429    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.879530    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.879619    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.879705    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.879839    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:43.879983    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:43.879990    6981 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:56:43.929347    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090203.957099350
	
	I0819 10:56:43.929360    6981 fix.go:216] guest clock: 1724090203.957099350
	I0819 10:56:43.929369    6981 fix.go:229] Guest: 2024-08-19 10:56:43.95709935 -0700 PDT Remote: 2024-08-19 10:56:43.87926 -0700 PDT m=+34.528562496 (delta=77.83935ms)
	I0819 10:56:43.929380    6981 fix.go:200] guest clock delta is within tolerance: 77.83935ms
	I0819 10:56:43.929384    6981 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.590694355s
	I0819 10:56:43.929402    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.929528    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:56:43.953921    6981 out.go:177] * Found network options:
	I0819 10:56:43.974820    6981 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:56:43.996762    6981 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:56:43.996798    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.997626    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.997854    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.997980    6981 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:56:43.998031    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:56:43.998082    6981 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:56:43.998186    6981 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:56:43.998213    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:43.998289    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.998453    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.998507    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.998692    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.998739    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.998913    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:56:43.998935    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.999057    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:56:44.026646    6981 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:56:44.026708    6981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:56:44.073369    6981 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:56:44.073399    6981 start.go:495] detecting cgroup driver to use...
	I0819 10:56:44.073500    6981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:56:44.089774    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:56:44.098097    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:56:44.106208    6981 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:56:44.106257    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:56:44.114280    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:56:44.122204    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:56:44.130272    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:56:44.138582    6981 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:56:44.147042    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:56:44.155299    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:56:44.163657    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:56:44.171914    6981 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:56:44.179280    6981 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:56:44.186999    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:44.285291    6981 ssh_runner.go:195] Run: sudo systemctl restart containerd
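
The run of `sed -i -r` commands above edits containerd's config.toml in place (forcing SystemdCgroup = false, the runc v2 runtime, and the CNI conf dir) before the restart, rather than parsing the TOML. The equivalent line-oriented rewrite in Go, as a sketch over an in-memory config:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        cfg := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
        // Same substitution as the sed call: keep indentation, force the value.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Printf("%s", re.ReplaceAll(cfg, []byte("${1}SystemdCgroup = false")))
    }
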
	I0819 10:56:44.305216    6981 start.go:495] detecting cgroup driver to use...
	I0819 10:56:44.305284    6981 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:56:44.329485    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:56:44.339850    6981 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:56:44.358582    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:56:44.369734    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:56:44.380526    6981 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:56:44.434117    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:56:44.444506    6981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:56:44.459620    6981 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:56:44.462637    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:56:44.469860    6981 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:56:44.483413    6981 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:56:44.579417    6981 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:56:44.683124    6981 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:56:44.683150    6981 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:56:44.697272    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:44.797185    6981 ssh_runner.go:195] Run: sudo systemctl restart docker

                                                
                                                
** /stderr **
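
The tail of the stderr log shows the runtime hand-off that was in progress when the process was killed: containerd and crio are probed with `systemctl is-active` and stopped if running, crictl is pointed at cri-dockerd's socket, and docker is reconfigured for cgroupfs and restarted. A dry-run Go sketch of that exclusion loop (runSSH is a hypothetical stand-in for minikube's ssh_runner; here it only prints):

    package main

    import "fmt"

    // runSSH is a hypothetical stand-in for minikube's ssh_runner; this dry run
    // just prints the command and pretends every probe succeeds.
    func runSSH(cmd string) error {
        fmt.Println("ssh:", cmd)
        return nil
    }

    func main() {
        // Stop any competing runtime before (re)starting docker, as in the log.
        for _, svc := range []string{"containerd", "crio"} {
            if runSSH("sudo systemctl is-active --quiet service "+svc) == nil {
                runSSH("sudo systemctl stop -f " + svc)
            }
        }
    }
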
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-431000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000: exit status 2 (151.633363ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.301569435s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-2l9lq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT |                     |
	|         | busybox-7dff88458-wfcpq -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:41 PDT | 19 Aug 24 10:41 PDT |
	|         | busybox-7dff88458-x7m6m -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-2l9lq -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT |                     |
	|         | busybox-7dff88458-wfcpq              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | busybox-7dff88458-x7m6m -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| node    | add -p ha-431000 -v=7                | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:42 PDT | 19 Aug 24 10:42 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node stop m02 -v=7         | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:43 PDT | 19 Aug 24 10:43 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-431000 node start m02 -v=7        | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:45 PDT | 19 Aug 24 10:45 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-431000 -v=7               | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:46 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-431000 -v=7                    | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:46 PDT | 19 Aug 24 10:47 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-431000 --wait=true -v=7        | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:47 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-431000                    | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:52 PDT |                     |
	| node    | ha-431000 node delete m03 -v=7       | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:52 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-431000 stop -v=7                  | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:54 PDT | 19 Aug 24 10:56 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-431000 --wait=true             | ha-431000 | jenkins | v1.33.1 | 19 Aug 24 10:56 PDT |                     |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=hyperkit                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 10:56:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 10:56:09.387037    6981 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:56:09.387223    6981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:56:09.387229    6981 out.go:358] Setting ErrFile to fd 2...
	I0819 10:56:09.387232    6981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:56:09.387409    6981 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:56:09.388880    6981 out.go:352] Setting JSON to false
	I0819 10:56:09.411239    6981 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5139,"bootTime":1724085030,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:56:09.411338    6981 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:56:09.433409    6981 out.go:177] * [ha-431000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:56:09.476100    6981 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:56:09.476156    6981 notify.go:220] Checking for updates...
	I0819 10:56:09.518722    6981 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:56:09.539864    6981 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:56:09.561099    6981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:56:09.582061    6981 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:56:09.603005    6981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:56:09.624771    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:09.625423    6981 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.625516    6981 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.635388    6981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52249
	I0819 10:56:09.635748    6981 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.636177    6981 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.636189    6981 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.636396    6981 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.636519    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:09.636718    6981 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:56:09.636945    6981 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.636967    6981 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.645612    6981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52251
	I0819 10:56:09.645982    6981 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.646319    6981 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.646343    6981 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.646562    6981 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.646665    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:09.675726    6981 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:56:09.717938    6981 start.go:297] selected driver: hyperkit
	I0819 10:56:09.717966    6981 start.go:901] validating driver "hyperkit" against &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:56:09.718211    6981 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:56:09.718380    6981 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:56:09.718594    6981 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 10:56:09.728278    6981 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 10:56:09.732198    6981 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.732218    6981 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 10:56:09.734893    6981 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 10:56:09.734966    6981 cni.go:84] Creating CNI manager for ""
	I0819 10:56:09.734976    6981 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:56:09.735058    6981 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:56:09.735153    6981 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 10:56:09.756949    6981 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0819 10:56:09.778034    6981 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:56:09.778106    6981 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 10:56:09.778135    6981 cache.go:56] Caching tarball of preloaded images
	I0819 10:56:09.778324    6981 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:56:09.778344    6981 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:56:09.778524    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:09.779449    6981 start.go:360] acquireMachinesLock for ha-431000: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:56:09.779569    6981 start.go:364] duration metric: took 95.514µs to acquireMachinesLock for "ha-431000"
	I0819 10:56:09.779608    6981 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:56:09.779625    6981 fix.go:54] fixHost starting: 
	I0819 10:56:09.780035    6981 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:09.780080    6981 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:09.789228    6981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52253
	I0819 10:56:09.789570    6981 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:09.789942    6981 main.go:141] libmachine: Using API Version  1
	I0819 10:56:09.789956    6981 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:09.790188    6981 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:09.790310    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:09.790421    6981 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0819 10:56:09.790499    6981 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.790583    6981 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6743
	I0819 10:56:09.791522    6981 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 6743 missing from process table
	I0819 10:56:09.791559    6981 fix.go:112] recreateIfNeeded on ha-431000: state=Stopped err=<nil>
	I0819 10:56:09.791574    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	W0819 10:56:09.791672    6981 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:56:09.833730    6981 out.go:177] * Restarting existing hyperkit VM for "ha-431000" ...
	I0819 10:56:09.854892    6981 main.go:141] libmachine: (ha-431000) Calling .Start
	I0819 10:56:09.855160    6981 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.855203    6981 main.go:141] libmachine: (ha-431000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid
	I0819 10:56:09.857060    6981 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid 6743 missing from process table
	I0819 10:56:09.857081    6981 main.go:141] libmachine: (ha-431000) DBG | pid 6743 is in state "Stopped"
	I0819 10:56:09.857096    6981 main.go:141] libmachine: (ha-431000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid...
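
The DBG lines above show hyperkit's stale-pid handling: the recorded pid 6743 is no longer in the process table, so the hyperkit.pid left behind by the unclean shutdown is removed before a new VM is launched. A standalone Go sketch of that liveness probe (the path is illustrative; signal 0 checks for existence without delivering anything):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
        "syscall"
    )

    // stalePid reports whether the pid recorded in path no longer exists, in
    // which case the pid file is safe to remove before starting a new VM.
    func stalePid(path string) (bool, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
        if err != nil {
            return false, err
        }
        // Signal 0 performs the existence check without sending a signal.
        if err := syscall.Kill(pid, 0); err == syscall.ESRCH {
            return true, nil // process is gone; pid file is stale
        }
        return false, nil
    }

    func main() {
        stale, err := stalePid("/tmp/hyperkit.pid")
        fmt.Println(stale, err)
    }
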
	I0819 10:56:09.857535    6981 main.go:141] libmachine: (ha-431000) DBG | Using UUID 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c
	I0819 10:56:09.970561    6981 main.go:141] libmachine: (ha-431000) DBG | Generated MAC b2:ad:7c:2f:19:d9
	I0819 10:56:09.970590    6981 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:56:09.970672    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8c00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:56:09.970699    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8c00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:56:09.970748    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f8450f1-36fc-4fbb-b5d6-699bdfe1640c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:56:09.970788    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f8450f1-36fc-4fbb-b5d6-699bdfe1640c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:56:09.970807    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:56:09.972280    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 DEBUG: hyperkit: Pid is 6995
	I0819 10:56:09.972670    6981 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0819 10:56:09.972685    6981 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:09.972868    6981 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 6995
	I0819 10:56:09.974774    6981 main.go:141] libmachine: (ha-431000) DBG | Searching for b2:ad:7c:2f:19:d9 in /var/db/dhcpd_leases ...
	I0819 10:56:09.974861    6981 main.go:141] libmachine: (ha-431000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:56:09.974891    6981 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 10:56:09.974908    6981 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d6bf}
	I0819 10:56:09.974929    6981 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d6ab}
	I0819 10:56:09.974944    6981 main.go:141] libmachine: (ha-431000) DBG | Found match: b2:ad:7c:2f:19:d9
	I0819 10:56:09.974967    6981 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.5
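
The lines above show how the hyperkit driver learns the new VM's address: it scans macOS's DHCP lease database for the MAC address it assigned to the guest. A minimal Go sketch of such a lookup follows; the brace-delimited block layout with ip_address=/hw_address= fields (and ip_address appearing before hw_address in each block) is an assumption about /var/db/dhcpd_leases based on the entries echoed in the log, not minikube's actual parser.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPByMAC scans the macOS DHCP lease database for a block whose
    // hw_address ends with the given MAC and returns the IP seen in that
    // block. Assumed lease format: { name=... ip_address=... hw_address=1,<mac> ... }
    func findIPByMAC(leasePath, mac string) (string, error) {
        f, err := os.Open(leasePath)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // a new lease block starts: forget the previous IP
                ip = ""
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
                return ip, nil // MAC matched inside the current block
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := findIPByMAC("/var/db/dhcpd_leases", "b2:ad:7c:2f:19:d9")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(ip) // 192.169.0.5 for the lease shown in the log above
    }
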
	I0819 10:56:09.975005    6981 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0819 10:56:09.975805    6981 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:56:09.975993    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:09.976453    6981 machine.go:93] provisionDockerMachine start ...
	I0819 10:56:09.976463    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:09.976570    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:09.976688    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:09.976807    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:09.976913    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:09.977033    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:09.977172    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:09.977450    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:09.977460    6981 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 10:56:09.980166    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:56:10.032027    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:56:10.032759    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:56:10.032774    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:56:10.032792    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:56:10.032806    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:56:10.411967    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:56:10.411990    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:56:10.526438    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:56:10.526455    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:56:10.526465    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:56:10.526476    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:56:10.527428    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:56:10.527460    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:56:16.111682    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0819 10:56:16.111715    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0819 10:56:16.111723    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0819 10:56:16.136032    6981 main.go:141] libmachine: (ha-431000) DBG | 2024/08/19 10:56:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0819 10:56:20.059539    6981 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
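
The single "connection refused" here is expected: the guest has booted but its sshd is not accepting connections yet, so the dial is simply retried until the hostname command succeeds a few seconds later. A hedged Go sketch of such a wait loop; the interval and timeout values are illustrative, not minikube's.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls the guest's SSH port until it accepts a TCP
    // connection, mirroring the retry-on-refused behaviour in the log.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(3 * time.Second) // illustrative retry interval
        }
        return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
    }

    func main() {
        if err := waitForSSH("192.169.0.5:22", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
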
	I0819 10:56:23.124072    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:56:23.124086    6981 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:56:23.124300    6981 buildroot.go:166] provisioning hostname "ha-431000"
	I0819 10:56:23.124312    6981 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:56:23.124408    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.124489    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.124602    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.124703    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.124799    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.124929    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.125177    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.125191    6981 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0819 10:56:23.193884    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0819 10:56:23.193904    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.194038    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.194146    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.194270    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.194375    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.194519    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.194668    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.194679    6981 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:56:23.260785    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 10:56:23.260805    6981 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:56:23.260822    6981 buildroot.go:174] setting up certificates
	I0819 10:56:23.260827    6981 provision.go:84] configureAuth start
	I0819 10:56:23.260833    6981 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0819 10:56:23.260971    6981 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:56:23.261088    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.261187    6981 provision.go:143] copyHostCerts
	I0819 10:56:23.261218    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:56:23.261288    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:56:23.261297    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:56:23.261682    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:56:23.261905    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:56:23.261947    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:56:23.261952    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:56:23.262034    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:56:23.262219    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:56:23.262264    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:56:23.262269    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:56:23.262412    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:56:23.262580    6981 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.5 ha-431000 localhost minikube]
	I0819 10:56:23.359637    6981 provision.go:177] copyRemoteCerts
	I0819 10:56:23.359688    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:56:23.359702    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.359820    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.359935    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.360020    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.360110    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:23.397504    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:56:23.397593    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 10:56:23.416728    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:56:23.416796    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:56:23.435752    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:56:23.435811    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:56:23.455175    6981 provision.go:87] duration metric: took 194.331491ms to configureAuth
	I0819 10:56:23.455187    6981 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:56:23.455360    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:23.455376    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:23.455501    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.455584    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.455667    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.455746    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.455831    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.455934    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.456063    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.456071    6981 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:56:23.514630    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:56:23.514642    6981 buildroot.go:70] root file system type: tmpfs
	I0819 10:56:23.514729    6981 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:56:23.514740    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.514876    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.514985    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.515095    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.515177    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.515317    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.515460    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.515505    6981 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:56:23.584286    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:56:23.584316    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:23.584457    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:23.584543    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.584638    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:23.584728    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:23.584864    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:23.585007    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:23.585021    6981 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:56:25.275768    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:56:25.275783    6981 machine.go:96] duration metric: took 15.299049026s to provisionDockerMachine
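
The diff-or-install one-liner a few lines above makes the unit update idempotent: docker.service.new only replaces the live unit (followed by daemon-reload, enable, and restart) when the two files differ; here diff fails because no unit existed yet, so the new file is moved into place and the symlink is created. A local Go sketch of the same write-if-changed idea, assuming an atomic rename on a single filesystem is acceptable:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // installIfChanged writes content to path only when it differs from
    // what is already there; the returned bool says whether a
    // daemon-reload/restart is warranted. It mirrors the ".new then mv"
    // sequence run over SSH in the log above.
    func installIfChanged(path string, content []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return false, nil // unchanged: nothing to reload or restart
        }
        if err != nil && !os.IsNotExist(err) {
            return false, err
        }
        tmp := path + ".new"
        if err := os.WriteFile(tmp, content, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(tmp, path) // atomic swap on the same filesystem
    }

    func main() {
        changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err)
    }
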
	I0819 10:56:25.275795    6981 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0819 10:56:25.275802    6981 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:56:25.275811    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.275997    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:56:25.276027    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.276128    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.276239    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.276337    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.276414    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:25.321744    6981 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:56:25.325075    6981 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:56:25.325087    6981 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:56:25.325190    6981 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:56:25.325376    6981 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:56:25.325383    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:56:25.325584    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:56:25.333943    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:56:25.362067    6981 start.go:296] duration metric: took 86.262531ms for postStartSetup
	I0819 10:56:25.362093    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.362269    6981 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:56:25.362289    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.362385    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.362481    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.362573    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.362661    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:25.400061    6981 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:56:25.400122    6981 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:56:25.453367    6981 fix.go:56] duration metric: took 15.67346414s for fixHost
	I0819 10:56:25.453389    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.453522    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.453620    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.453724    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.453811    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.453937    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:25.454090    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0819 10:56:25.454097    6981 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:56:25.512221    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090185.541072493
	
	I0819 10:56:25.512233    6981 fix.go:216] guest clock: 1724090185.541072493
	I0819 10:56:25.512238    6981 fix.go:229] Guest: 2024-08-19 10:56:25.541072493 -0700 PDT Remote: 2024-08-19 10:56:25.453379 -0700 PDT m=+16.103011649 (delta=87.693493ms)
	I0819 10:56:25.512259    6981 fix.go:200] guest clock delta is within tolerance: 87.693493ms
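
date +%s.%N prints the guest clock as seconds.nanoseconds; the fix step parses that and compares it with the host clock, accepting the machine when the skew is small (about 88ms here). A sketch of the delta computation; the 2-second tolerance below is an assumed threshold, since the log only states that the observed delta was within tolerance.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Guest output of `date +%s.%N`, as captured in the log above.
        raw := "1724090185.541072493"
        secStr, nsecStr, _ := strings.Cut(raw, ".")
        sec, _ := strconv.ParseInt(secStr, 10, 64)
        nsec, _ := strconv.ParseInt(nsecStr, 10, 64)
        guest := time.Unix(sec, nsec)

        host := time.Now() // stands in for the "Remote" timestamp in the log
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
    }
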
	I0819 10:56:25.512269    6981 start.go:83] releasing machines lock for "ha-431000", held for 15.732401062s
	I0819 10:56:25.512292    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.512419    6981 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:56:25.512514    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.512822    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.512930    6981 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0819 10:56:25.513011    6981 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:56:25.513042    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.513060    6981 ssh_runner.go:195] Run: cat /version.json
	I0819 10:56:25.513070    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0819 10:56:25.513140    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.513154    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0819 10:56:25.513205    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.513231    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0819 10:56:25.513284    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.513318    6981 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0819 10:56:25.513357    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:25.513383    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0819 10:56:25.592288    6981 ssh_runner.go:195] Run: systemctl --version
	I0819 10:56:25.597153    6981 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 10:56:25.601380    6981 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:56:25.601424    6981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:56:25.614660    6981 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:56:25.614671    6981 start.go:495] detecting cgroup driver to use...
	I0819 10:56:25.614767    6981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:56:25.631529    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:56:25.640397    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:56:25.649192    6981 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:56:25.649232    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:56:25.658096    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:56:25.666956    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:56:25.675821    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:56:25.684510    6981 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:56:25.693585    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:56:25.702323    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:56:25.715509    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:56:25.724687    6981 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:56:25.731994    6981 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:56:25.739249    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:25.828532    6981 ssh_runner.go:195] Run: sudo systemctl restart containerd
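
The block of sed invocations above edits /etc/containerd/config.toml in place before the restart: pinning the sandbox (pause) image, forcing SystemdCgroup = false so containerd uses the cgroupfs driver, and migrating runtime names to io.containerd.runc.v2. The cgroup-driver substitution, transliterated into a Go regexp as a sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Equivalent of the sed expression used above:
        //   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        in := "    SystemdCgroup = true\n"
        out := re.ReplaceAllString(in, "${1}SystemdCgroup = false")
        fmt.Print(out) // "    SystemdCgroup = false", indentation preserved
    }
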
	I0819 10:56:25.848493    6981 start.go:495] detecting cgroup driver to use...
	I0819 10:56:25.848569    6981 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:56:25.863350    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:56:25.879011    6981 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:56:25.896262    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:56:25.907139    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:56:25.917546    6981 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:56:25.939914    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:56:25.950034    6981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:56:25.964691    6981 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:56:25.967669    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:56:25.974806    6981 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:56:25.988317    6981 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:56:26.081595    6981 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:56:26.191696    6981 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:56:26.191769    6981 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:56:26.205687    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:26.297875    6981 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 10:56:28.657143    6981 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.359206981s)
	I0819 10:56:28.657214    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 10:56:28.667753    6981 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 10:56:28.680506    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:56:28.690501    6981 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 10:56:28.783300    6981 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 10:56:28.887365    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:28.995138    6981 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 10:56:29.013380    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 10:56:29.023676    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:29.117464    6981 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 10:56:29.179606    6981 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 10:56:29.179685    6981 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 10:56:29.184114    6981 start.go:563] Will wait 60s for crictl version
	I0819 10:56:29.184165    6981 ssh_runner.go:195] Run: which crictl
	I0819 10:56:29.187049    6981 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 10:56:29.212932    6981 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0819 10:56:29.213012    6981 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:56:29.229631    6981 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 10:56:29.272737    6981 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0819 10:56:29.272789    6981 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0819 10:56:29.273156    6981 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0819 10:56:29.277848    6981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
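
The bash pipeline above makes the /etc/hosts update idempotent: filter out any existing host.minikube.internal line, append a fresh IP-tab-name entry, and copy the temp file back into place with sudo. The same filter-and-append step as a small Go sketch (writing the file directly rather than via sudo cp):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line ending in "\t<name>" and
    // appends a fresh "<ip>\t<name>" entry, mirroring the grep -v / echo
    // pipeline run over SSH in the log above.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
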
	I0819 10:56:29.287607    6981 kubeadm.go:883] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 10:56:29.287697    6981 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:56:29.287753    6981 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:56:29.301005    6981 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:56:29.301016    6981 docker.go:615] Images already preloaded, skipping extraction
	I0819 10:56:29.301094    6981 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 10:56:29.314630    6981 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0819 10:56:29.314644    6981 cache_images.go:84] Images are preloaded, skipping loading
	I0819 10:56:29.314653    6981 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0819 10:56:29.314737    6981 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 10:56:29.314807    6981 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 10:56:29.352431    6981 cni.go:84] Creating CNI manager for ""
	I0819 10:56:29.352444    6981 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 10:56:29.352456    6981 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 10:56:29.352472    6981 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 10:56:29.352556    6981 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
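
kubeadm accepts all four of these objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) as one multi-document YAML file separated by --- markers; the log further down shows it being shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch that splits such a stream and reports each document's kind:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Abbreviated stand-in for the generated config shown above.
        cfg := strings.Join([]string{
            "apiVersion: kubeadm.k8s.io/v1beta3",
            "kind: InitConfiguration",
            "---",
            "apiVersion: kubeadm.k8s.io/v1beta3",
            "kind: ClusterConfiguration",
            "---",
            "apiVersion: kubelet.config.k8s.io/v1beta1",
            "kind: KubeletConfiguration",
            "---",
            "apiVersion: kubeproxy.config.k8s.io/v1alpha1",
            "kind: KubeProxyConfiguration",
        }, "\n")

        // Documents in one file are separated by a bare "---" line.
        for i, doc := range strings.Split(cfg, "\n---\n") {
            kind := "?"
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    kind = strings.TrimPrefix(line, "kind: ")
                }
            }
            fmt.Printf("document %d: %s\n", i, kind)
        }
    }
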
	
	I0819 10:56:29.352570    6981 kube-vip.go:115] generating kube-vip config ...
	I0819 10:56:29.352619    6981 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 10:56:29.364946    6981 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 10:56:29.365018    6981 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 10:56:29.365072    6981 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 10:56:29.372661    6981 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 10:56:29.372708    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 10:56:29.380027    6981 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0819 10:56:29.393672    6981 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 10:56:29.406853    6981 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0819 10:56:29.420484    6981 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0819 10:56:29.433844    6981 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0819 10:56:29.436764    6981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 10:56:29.445878    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:29.540868    6981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 10:56:29.555532    6981 certs.go:68] Setting up /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000 for IP: 192.169.0.5
	I0819 10:56:29.555544    6981 certs.go:194] generating shared ca certs ...
	I0819 10:56:29.555554    6981 certs.go:226] acquiring lock for ca certs: {Name:mk14b1fc026e35e37547224913a7cb83f2bf507a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:56:29.555749    6981 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key
	I0819 10:56:29.555835    6981 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key
	I0819 10:56:29.555845    6981 certs.go:256] generating profile certs ...
	I0819 10:56:29.555952    6981 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key
	I0819 10:56:29.556031    6981 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key.cbca8d59
	I0819 10:56:29.556114    6981 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key
	I0819 10:56:29.556123    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 10:56:29.556144    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 10:56:29.556161    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 10:56:29.556184    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 10:56:29.556206    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 10:56:29.556235    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 10:56:29.556265    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 10:56:29.556283    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 10:56:29.556384    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem (1338 bytes)
	W0819 10:56:29.556431    6981 certs.go:480] ignoring /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0819 10:56:29.556440    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 10:56:29.556474    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem (1082 bytes)
	I0819 10:56:29.556508    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem (1123 bytes)
	I0819 10:56:29.556540    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem (1679 bytes)
	I0819 10:56:29.556611    6981 certs.go:484] found cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:56:29.556646    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:56:29.556667    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem -> /usr/share/ca-certificates/2174.pem
	I0819 10:56:29.556692    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /usr/share/ca-certificates/21742.pem
	I0819 10:56:29.557189    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 10:56:29.599246    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 10:56:29.617881    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 10:56:29.636687    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 10:56:29.659252    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 10:56:29.692653    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 10:56:29.731841    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 10:56:29.799906    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 10:56:29.845242    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 10:56:29.877042    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0819 10:56:29.905021    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0819 10:56:29.944897    6981 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 10:56:29.979360    6981 ssh_runner.go:195] Run: openssl version
	I0819 10:56:29.985756    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0819 10:56:29.998027    6981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0819 10:56:30.002417    6981 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:01 /usr/share/ca-certificates/21742.pem
	I0819 10:56:30.002461    6981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0819 10:56:30.007997    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 10:56:30.022681    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 10:56:30.037160    6981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:56:30.042096    6981 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:56:30.042154    6981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 10:56:30.048983    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 10:56:30.060437    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0819 10:56:30.069476    6981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0819 10:56:30.072891    6981 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:01 /usr/share/ca-certificates/2174.pem
	I0819 10:56:30.072925    6981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0819 10:56:30.077193    6981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
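
Each of the ln -fs commands above implements OpenSSL's hashed-directory convention: openssl x509 -hash -noout prints an 8-hex-digit subject hash (b5213941 for minikubeCA.pem here, 3ec20f2e and 51391683 for the other two certs), and a <hash>.0 symlink in /etc/ssl/certs is how OpenSSL locates a CA during verification. A local Go sketch of the same step; minikube actually runs it remotely over SSH.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash asks openssl for the certificate's subject hash and
    // creates the <hash>.0 symlink used for CA lookups, as in the log.
    func linkByHash(certPath, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA
        link := certDir + "/" + hash + ".0"
        _ = os.Remove(link) // like -f: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
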
	I0819 10:56:30.086257    6981 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 10:56:30.089634    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 10:56:30.093907    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 10:56:30.098134    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 10:56:30.102491    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 10:56:30.106994    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 10:56:30.111242    6981 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
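Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would force minikube to regenerate the cert. The same check in pure Go with crypto/x509, using one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Go equivalent of: openssl x509 -noout -in <crt> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data in certificate file")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire") // openssl's wording; exit 1 triggers regeneration
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}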
	I0819 10:56:30.115484    6981 kubeadm.go:392] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:56:30.115606    6981 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 10:56:30.135150    6981 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 10:56:30.143921    6981 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 10:56:30.143931    6981 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 10:56:30.143976    6981 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 10:56:30.152249    6981 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 10:56:30.152544    6981 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-431000" does not appear in /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:56:30.152629    6981 kubeconfig.go:62] /Users/jenkins/minikube-integration/19478-1622/kubeconfig needs updating (will repair): [kubeconfig missing "ha-431000" cluster setting kubeconfig missing "ha-431000" context setting]
	I0819 10:56:30.152837    6981 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:56:30.153454    6981 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:56:30.153654    6981 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/19478-1622/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdeec2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 10:56:30.153974    6981 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 10:56:30.154142    6981 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 10:56:30.162096    6981 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0819 10:56:30.162107    6981 kubeadm.go:597] duration metric: took 18.172014ms to restartPrimaryControlPlane
	I0819 10:56:30.162112    6981 kubeadm.go:394] duration metric: took 46.636783ms to StartCluster
	I0819 10:56:30.162124    6981 settings.go:142] acquiring lock: {Name:mkb22512113a0bd29ba5c621b486982b538d8cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 10:56:30.162205    6981 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:56:30.162583    6981 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/kubeconfig: {Name:mkcfa71f7ad79a7af5c50bbdb1b5294fa9b27a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
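kubeconfig.go above detects that both the "ha-431000" cluster and context entries are missing and repairs the file under a write lock. Here is a hedged sketch of that repair with client-go's clientcmd package; the server address and cert paths are the ones reported in the kapi.go line above, while the locking and merge-with-existing-entries logic that minikube performs is omitted.

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/Users/jenkins/minikube-integration/19478-1622/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = clientcmdapi.NewConfig() // start fresh if the file is unreadable
	}
	name := "ha-431000"
	base := "/Users/jenkins/minikube-integration/19478-1622/.minikube"
	cfg.Clusters[name] = &clientcmdapi.Cluster{
		Server:               "https://192.169.0.5:8443",
		CertificateAuthority: base + "/ca.crt",
	}
	cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{
		ClientCertificate: base + "/profiles/ha-431000/client.crt",
		ClientKey:         base + "/profiles/ha-431000/client.key",
	}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}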
	I0819 10:56:30.162809    6981 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 10:56:30.162822    6981 start.go:241] waiting for startup goroutines ...
	I0819 10:56:30.162833    6981 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 10:56:30.162953    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:30.207316    6981 out.go:177] * Enabled addons: 
	I0819 10:56:30.229323    6981 addons.go:510] duration metric: took 66.491913ms for enable addons: enabled=[]
	I0819 10:56:30.229376    6981 start.go:246] waiting for cluster config update ...
	I0819 10:56:30.229387    6981 start.go:255] writing updated cluster config ...
	I0819 10:56:30.251212    6981 out.go:201] 
	I0819 10:56:30.272839    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:30.272969    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:30.295470    6981 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0819 10:56:30.336958    6981 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 10:56:30.336992    6981 cache.go:56] Caching tarball of preloaded images
	I0819 10:56:30.337177    6981 preload.go:172] Found /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0819 10:56:30.337195    6981 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 10:56:30.337332    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:30.338308    6981 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mk8fd532700d1d4bbb218fbc3d7b94112d0b956a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 10:56:30.338435    6981 start.go:364] duration metric: took 98.75µs to acquireMachinesLock for "ha-431000-m02"
	I0819 10:56:30.338470    6981 start.go:96] Skipping create...Using existing machine configuration
	I0819 10:56:30.338478    6981 fix.go:54] fixHost starting: m02
	I0819 10:56:30.338906    6981 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:56:30.338952    6981 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:56:30.348209    6981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52276
	I0819 10:56:30.348566    6981 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:56:30.348941    6981 main.go:141] libmachine: Using API Version  1
	I0819 10:56:30.348955    6981 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:56:30.349205    6981 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:56:30.349316    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:30.349413    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0819 10:56:30.349494    6981 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:30.349575    6981 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 6783
	I0819 10:56:30.350514    6981 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6783 missing from process table
	I0819 10:56:30.350551    6981 fix.go:112] recreateIfNeeded on ha-431000-m02: state=Stopped err=<nil>
	I0819 10:56:30.350562    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	W0819 10:56:30.350646    6981 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 10:56:30.372317    6981 out.go:177] * Restarting existing hyperkit VM for "ha-431000-m02" ...
	I0819 10:56:30.414203    6981 main.go:141] libmachine: (ha-431000-m02) Calling .Start
	I0819 10:56:30.414469    6981 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:30.414521    6981 main.go:141] libmachine: (ha-431000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0819 10:56:30.416354    6981 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 6783 missing from process table
	I0819 10:56:30.416368    6981 main.go:141] libmachine: (ha-431000-m02) DBG | pid 6783 is in state "Stopped"
	I0819 10:56:30.416390    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid...
	I0819 10:56:30.416765    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID decf6192-ca77-4e23-95db-084dbcc69753
	I0819 10:56:30.443708    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC 5a:74:68:47:b9:72
	I0819 10:56:30.443734    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0819 10:56:30.443894    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:56:30.443925    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"decf6192-ca77-4e23-95db-084dbcc69753", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003beb40)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0819 10:56:30.443967    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "decf6192-ca77-4e23-95db-084dbcc69753", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0819 10:56:30.444021    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U decf6192-ca77-4e23-95db-084dbcc69753 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0819 10:56:30.444046    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0819 10:56:30.445458    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 DEBUG: hyperkit: Pid is 7000
	I0819 10:56:30.445867    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0819 10:56:30.445892    6981 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 10:56:30.445941    6981 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 7000
	I0819 10:56:30.447945    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for 5a:74:68:47:b9:72 in /var/db/dhcpd_leases ...
	I0819 10:56:30.448023    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0819 10:56:30.448039    6981 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:b2:ad:7c:2f:19:d9 ID:1,b2:ad:7c:2f:19:d9 Lease:0x66c4d8c3}
	I0819 10:56:30.448056    6981 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f6:29:ff:43:e4:63 ID:1,f6:29:ff:43:e4:63 Lease:0x66c38727}
	I0819 10:56:30.448068    6981 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:5a:74:68:47:b9:72 ID:1,5a:74:68:47:b9:72 Lease:0x66c4d6bf}
	I0819 10:56:30.448081    6981 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: 5a:74:68:47:b9:72
	I0819 10:56:30.448095    6981 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.6
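With hyperkit there is no driver-side DHCP server: the VM leases its address from macOS's bootpd, so the driver recovers the IP by scanning /var/db/dhcpd_leases for the MAC it generated, as the DBG lines above show. A small parser in the same spirit; the brace-delimited, field-per-line lease format (name=/ip_address=/hw_address=) is assumed from the entries echoed in the log, not confirmed by it.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans entries of the (assumed) form
//   { name=... ip_address=... hw_address=1,aa:bb:cc:dd:ee:ff lease=... }
// with one field per line, as suggested by the parsed entries in the log.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip = "" // start of a new lease entry
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:] // drop the "1," hardware-type prefix
			}
			if strings.EqualFold(hw, mac) && ip != "" {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "5a:74:68:47:b9:72")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // 192.169.0.6, per the match in the log
}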
	I0819 10:56:30.448141    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0819 10:56:30.448849    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:56:30.449056    6981 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/ha-431000/config.json ...
	I0819 10:56:30.449547    6981 machine.go:93] provisionDockerMachine start ...
	I0819 10:56:30.449557    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:30.449675    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:30.449784    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:30.449881    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:30.449987    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:30.450088    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:30.450195    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:30.450353    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:30.450361    6981 main.go:141] libmachine: About to run SSH command:
	hostname
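"Using SSH client type: native" means the command runs over Go's own SSH stack rather than the system ssh binary. A stripped-down version of the same round trip with golang.org/x/crypto/ssh, using the address, user, and key path reported in the log; host-key verification is disabled here purely to keep the sketch short.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only: no host-key verification
	}
	client, err := ssh.Dial("tcp", "192.169.0.6:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints "minikube" here, before the hostname is reprovisioned
}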
	I0819 10:56:30.453488    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0819 10:56:30.462353    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0819 10:56:30.463409    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:56:30.463422    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:56:30.463433    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:56:30.463443    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:56:30.845998    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0819 10:56:30.846010    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0819 10:56:30.960635    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0819 10:56:30.960655    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0819 10:56:30.960662    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0819 10:56:30.960688    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0819 10:56:30.961476    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0819 10:56:30.961486    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0819 10:56:36.544155    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:36 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0819 10:56:36.544211    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:36 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0819 10:56:36.544223    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:36 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0819 10:56:36.568477    6981 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/08/19 10:56:36 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0819 10:56:41.505008    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 10:56:41.505022    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:56:41.505146    6981 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0819 10:56:41.505155    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:56:41.505234    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.505320    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.505407    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.505489    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.505567    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.505722    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:41.505871    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:41.505879    6981 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0819 10:56:41.565288    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0819 10:56:41.565303    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.565441    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.565542    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.565626    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.565709    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.565844    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:41.566011    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:41.566024    6981 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 10:56:41.623307    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
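The shell above is the standard /etc/hosts fixup: if no line ends in the new hostname, the 127.0.1.1 entry is rewritten (or appended) to point at it, so tools on the guest can resolve the machine's own name. The same edit done locally in Go, as a sketch:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell script above: if name is absent
// from the hosts file, rewrite (or append) the 127.0.1.1 line.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	hosts := string(data)
	// grep -xq '.*\s<name>' equivalent: any line ending in the name.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return nil // already present
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hosts) {
		hosts = loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
	} else {
		if !strings.HasSuffix(hosts, "\n") {
			hosts += "\n"
		}
		hosts += "127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(path, []byte(hosts), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-431000-m02"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}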
	I0819 10:56:41.623322    6981 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19478-1622/.minikube CaCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19478-1622/.minikube}
	I0819 10:56:41.623330    6981 buildroot.go:174] setting up certificates
	I0819 10:56:41.623337    6981 provision.go:84] configureAuth start
	I0819 10:56:41.623343    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0819 10:56:41.623485    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:56:41.623593    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.623676    6981 provision.go:143] copyHostCerts
	I0819 10:56:41.623706    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:56:41.623762    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem, removing ...
	I0819 10:56:41.623769    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem
	I0819 10:56:41.624200    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/cert.pem (1123 bytes)
	I0819 10:56:41.624417    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:56:41.624448    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem, removing ...
	I0819 10:56:41.624453    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem
	I0819 10:56:41.624522    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/key.pem (1679 bytes)
	I0819 10:56:41.624676    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:56:41.624707    6981 exec_runner.go:144] found /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem, removing ...
	I0819 10:56:41.624712    6981 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem
	I0819 10:56:41.624782    6981 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19478-1622/.minikube/ca.pem (1082 bytes)
	I0819 10:56:41.624934    6981 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.6 ha-431000-m02 localhost minikube]
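provision.go then mints a per-machine Docker server certificate signed by the minikube CA, with the SAN list shown in the log line above (127.0.0.1, 192.169.0.6, ha-431000-m02, localhost, minikube). A condensed sketch of that signing step with crypto/x509; it assumes the CA key is an RSA/PKCS#1 .pem (suggested by the ca-key.pem naming but not confirmed by the log), and the 26280h lifetime comes from CertExpiration in the config dump earlier.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block
}

func main() {
	// CA pair per the CaCertPath/CaPrivateKeyPath values logged above.
	caCert, err := x509.ParseCertificate(mustPEM("certs/ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("certs/ca-key.pem").Bytes) // RSA key assumed
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m02"}}, // org= in the log
		DNSNames:     []string{"ha-431000-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	crt := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	key := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	if err := os.WriteFile("server.pem", crt, 0644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("server-key.pem", key, 0600); err != nil {
		panic(err)
	}
}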
	I0819 10:56:41.834784    6981 provision.go:177] copyRemoteCerts
	I0819 10:56:41.834846    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 10:56:41.834860    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.835000    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.835091    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.835186    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.835288    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:56:41.866060    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 10:56:41.866147    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 10:56:41.885413    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 10:56:41.885478    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 10:56:41.904963    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 10:56:41.905035    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 10:56:41.924331    6981 provision.go:87] duration metric: took 300.981908ms to configureAuth
	I0819 10:56:41.924343    6981 buildroot.go:189] setting minikube options for container-runtime
	I0819 10:56:41.924516    6981 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:56:41.924544    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:41.924686    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.924771    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.924843    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.924919    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.925004    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.925116    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:41.925233    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:41.925240    6981 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 10:56:41.974425    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 10:56:41.974437    6981 buildroot.go:70] root file system type: tmpfs
	I0819 10:56:41.974511    6981 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 10:56:41.974522    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:41.974649    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:41.974738    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.974832    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:41.974919    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:41.975042    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:41.975185    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:41.975231    6981 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 10:56:42.033848    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 10:56:42.033865    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:42.033997    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:42.034082    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:42.034173    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:42.034263    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:42.034398    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:42.034538    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:42.034551    6981 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 10:56:43.712997    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 10:56:43.713011    6981 machine.go:96] duration metric: took 13.263218761s to provisionDockerMachine
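The `sudo diff -u ... || { mv ...; daemon-reload; enable; restart; }` command above is an idempotency guard: docker is only reinstalled and restarted when the freshly rendered unit differs from the one on disk. Here diff fails because no unit exists yet, so the new file is moved into place and systemd prints the "Created symlink" line. A local Go sketch of the same compare-then-swap pattern; paths and the unit body are illustrative.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged writes newUnit only when it differs from what is on
// disk and reports whether systemd needs to be poked afterwards.
func installIfChanged(unitPath string, newUnit []byte) (bool, error) {
	current, err := os.ReadFile(unitPath)
	if err == nil && bytes.Equal(current, newUnit) {
		return false, nil // unit unchanged: skip the docker restart
	}
	if err := os.WriteFile(unitPath, newUnit, 0644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated example
	changed, err := installIfChanged("/lib/systemd/system/docker.service", unit)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if changed {
		_ = exec.Command("systemctl", "daemon-reload").Run()
		_ = exec.Command("systemctl", "enable", "docker").Run()
		_ = exec.Command("systemctl", "restart", "docker").Run()
	}
}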
	I0819 10:56:43.713019    6981 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0819 10:56:43.713026    6981 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 10:56:43.713035    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.713216    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 10:56:43.713228    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:43.713316    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.713406    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.713493    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.713587    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:56:43.752505    6981 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 10:56:43.755744    6981 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 10:56:43.755757    6981 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/addons for local assets ...
	I0819 10:56:43.755860    6981 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19478-1622/.minikube/files for local assets ...
	I0819 10:56:43.756028    6981 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0819 10:56:43.756035    6981 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem -> /etc/ssl/certs/21742.pem
	I0819 10:56:43.756193    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 10:56:43.765051    6981 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0819 10:56:43.793166    6981 start.go:296] duration metric: took 80.136725ms for postStartSetup
	I0819 10:56:43.793188    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.793370    6981 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 10:56:43.793383    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:43.793484    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.793569    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.793660    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.793746    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:56:43.825409    6981 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0819 10:56:43.825478    6981 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0819 10:56:43.879245    6981 fix.go:56] duration metric: took 13.540521433s for fixHost
	I0819 10:56:43.879270    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:43.879429    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.879530    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.879619    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.879705    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.879839    6981 main.go:141] libmachine: Using SSH client type: native
	I0819 10:56:43.879983    6981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc832ea0] 0xc835c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0819 10:56:43.879990    6981 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 10:56:43.929347    6981 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090203.957099350
	
	I0819 10:56:43.929360    6981 fix.go:216] guest clock: 1724090203.957099350
	I0819 10:56:43.929369    6981 fix.go:229] Guest: 2024-08-19 10:56:43.95709935 -0700 PDT Remote: 2024-08-19 10:56:43.87926 -0700 PDT m=+34.528562496 (delta=77.83935ms)
	I0819 10:56:43.929380    6981 fix.go:200] guest clock delta is within tolerance: 77.83935ms
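fix.go compares the guest's `date +%s.%N` output against the host clock and resynchronizes only when the delta leaves tolerance; here 77.8ms was acceptable. A sketch of the delta computation; the 2-second threshold below is an assumption for illustration, since the log only records that this particular delta passed.

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Guest timestamp exactly as printed by `date +%s.%N` in the log.
	guestSec, err := strconv.ParseFloat("1724090203.957099350", 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(guestSec*float64(time.Second)))
	host := time.Now() // captured right around the SSH call in the real flow

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance; the log only shows that 77.83935ms was "within tolerance".
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta: %v (ok: %v)\n", delta, delta <= tolerance)
}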
	I0819 10:56:43.929384    6981 start.go:83] releasing machines lock for "ha-431000-m02", held for 13.590694355s
	I0819 10:56:43.929402    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.929528    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0819 10:56:43.953921    6981 out.go:177] * Found network options:
	I0819 10:56:43.974820    6981 out.go:177]   - NO_PROXY=192.169.0.5
	W0819 10:56:43.996762    6981 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:56:43.996798    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.997626    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.997854    6981 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0819 10:56:43.997980    6981 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 10:56:43.998031    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0819 10:56:43.998082    6981 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 10:56:43.998186    6981 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 10:56:43.998213    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0819 10:56:43.998289    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.998453    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.998507    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0819 10:56:43.998692    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.998739    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0819 10:56:43.998913    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0819 10:56:43.998935    6981 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0819 10:56:43.999057    6981 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0819 10:56:44.026646    6981 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 10:56:44.026708    6981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 10:56:44.073369    6981 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 10:56:44.073399    6981 start.go:495] detecting cgroup driver to use...
	I0819 10:56:44.073500    6981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:56:44.089774    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 10:56:44.098097    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 10:56:44.106208    6981 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 10:56:44.106257    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 10:56:44.114280    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:56:44.122204    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 10:56:44.130272    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 10:56:44.138582    6981 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 10:56:44.147042    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 10:56:44.155299    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 10:56:44.163657    6981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 10:56:44.171914    6981 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 10:56:44.179280    6981 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 10:56:44.186999    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:44.285291    6981 ssh_runner.go:195] Run: sudo systemctl restart containerd
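The run of sed edits above (see containerd.go:146) rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver, most importantly flipping SystemdCgroup to false before the daemon-reload and restart. The key substitution, done in Go instead of sed:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same edit as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, data, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}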
	I0819 10:56:44.305216    6981 start.go:495] detecting cgroup driver to use...
	I0819 10:56:44.305284    6981 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 10:56:44.329485    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:56:44.339850    6981 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 10:56:44.358582    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 10:56:44.369734    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:56:44.380526    6981 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 10:56:44.434117    6981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 10:56:44.444506    6981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 10:56:44.459620    6981 ssh_runner.go:195] Run: which cri-dockerd
	I0819 10:56:44.462637    6981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 10:56:44.469860    6981 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0819 10:56:44.483413    6981 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 10:56:44.579417    6981 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 10:56:44.683124    6981 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 10:56:44.683150    6981 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 10:56:44.697272    6981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 10:56:44.797185    6981 ssh_runner.go:195] Run: sudo systemctl restart docker
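docker.go:574 above reports pushing a 130-byte /etc/docker/daemon.json that pins docker itself to the cgroupfs driver, matching the containerd setting. The payload is not echoed in the log; the sketch below renders the shape such a file typically takes, where only the exec-opts entry is what the log line attests and the logging options are illustrative defaults.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// daemon.json selecting the cgroupfs driver; exec-opts is the part
	// the log confirms, the rest is an assumed typical configuration.
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}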
	
	
	==> Docker <==
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.590628022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.590723998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.590733538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.590978369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.600761071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.600843089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.600852094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.600916449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.608760969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.608880596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.608893308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:56:36 ha-431000 dockerd[1192]: time="2024-08-19T17:56:36.609217747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:56:57 ha-431000 dockerd[1186]: time="2024-08-19T17:56:57.618025390Z" level=info msg="ignoring event" container=3ff38983436539d7eabb93160f708961fc1fba49a35da5b1be83efe18870dfaa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:56:57 ha-431000 dockerd[1192]: time="2024-08-19T17:56:57.618192294Z" level=info msg="shim disconnected" id=3ff38983436539d7eabb93160f708961fc1fba49a35da5b1be83efe18870dfaa namespace=moby
	Aug 19 17:56:57 ha-431000 dockerd[1192]: time="2024-08-19T17:56:57.618276171Z" level=warning msg="cleaning up after shim disconnected" id=3ff38983436539d7eabb93160f708961fc1fba49a35da5b1be83efe18870dfaa namespace=moby
	Aug 19 17:56:57 ha-431000 dockerd[1192]: time="2024-08-19T17:56:57.618285424Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:56:57 ha-431000 dockerd[1192]: time="2024-08-19T17:56:57.628664694Z" level=warning msg="cleanup warnings time=\"2024-08-19T17:56:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 19 17:56:58 ha-431000 dockerd[1192]: time="2024-08-19T17:56:58.626899881Z" level=info msg="shim disconnected" id=fa34673794871b77218e4d894f632eb15e6bd6210bc5a2d23f20c36bd1f2270d namespace=moby
	Aug 19 17:56:58 ha-431000 dockerd[1192]: time="2024-08-19T17:56:58.626992618Z" level=warning msg="cleaning up after shim disconnected" id=fa34673794871b77218e4d894f632eb15e6bd6210bc5a2d23f20c36bd1f2270d namespace=moby
	Aug 19 17:56:58 ha-431000 dockerd[1192]: time="2024-08-19T17:56:58.627003785Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 19 17:56:58 ha-431000 dockerd[1186]: time="2024-08-19T17:56:58.631620349Z" level=info msg="ignoring event" container=fa34673794871b77218e4d894f632eb15e6bd6210bc5a2d23f20c36bd1f2270d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 19 17:57:08 ha-431000 dockerd[1192]: time="2024-08-19T17:57:08.982793634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 17:57:08 ha-431000 dockerd[1192]: time="2024-08-19T17:57:08.990915244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 17:57:08 ha-431000 dockerd[1192]: time="2024-08-19T17:57:08.990926721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 17:57:08 ha-431000 dockerd[1192]: time="2024-08-19T17:57:08.991214862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	abdb280356686       045733566833c       2 seconds ago       Running             kube-controller-manager   4                   bbd7e5cdb0f48       kube-controller-manager-ha-431000
	41a26b3da0f51       38af8ddebf499       34 seconds ago      Running             kube-vip                  1                   9d3b66b13c7ae       kube-vip-ha-431000
	47db9cfc528d1       1766f54c897f0       34 seconds ago      Running             kube-scheduler            2                   0fa4db9de9c3d       kube-scheduler-ha-431000
	fa34673794871       045733566833c       34 seconds ago      Exited              kube-controller-manager   3                   bbd7e5cdb0f48       kube-controller-manager-ha-431000
	3ff3898343653       604f5db92eaa8       34 seconds ago      Exited              kube-apiserver            4                   d38f14a0fa42e       kube-apiserver-ha-431000
	172f3186b2803       2e96e5913fc06       34 seconds ago      Running             etcd                      2                   90d8b3d5319aa       etcd-ha-431000
	bcf3cd19406a4       6e38f40d628db       8 minutes ago       Exited              storage-provisioner       3                   19da8eae0d48a       storage-provisioner
	414908be37c88       8c811b4aec35f       8 minutes ago       Exited              busybox                   1                   fd28a05caf8d7       busybox-7dff88458-x7m6m
	51e18fb0428a6       12968670680f4       8 minutes ago       Exited              kindnet-cni               1                   bb2d3a2636faf       kindnet-lvdbg
	d7843c76d3e01       cbb01a7bd410d       8 minutes ago       Exited              coredns                   1                   ca4ec932efa63       coredns-6f6b679f8f-vc76p
	29764bad0bc90       cbb01a7bd410d       8 minutes ago       Exited              coredns                   1                   1d64ea8ea4f81       coredns-6f6b679f8f-hr2qx
	5636b94096fee       ad83b2ca7b09e       8 minutes ago       Exited              kube-proxy                1                   5627589c9455b       kube-proxy-5l56s
	11f4d59b4fb1d       38af8ddebf499       9 minutes ago       Exited              kube-vip                  0                   43fb644937b95       kube-vip-ha-431000
	dea4f29e78603       1766f54c897f0       9 minutes ago       Exited              kube-scheduler            1                   9e839ed84518f       kube-scheduler-ha-431000
	1bac9a6bc6836       2e96e5913fc06       9 minutes ago       Exited              etcd                      1                   c143d60007e3b       etcd-ha-431000
	
	
	==> coredns [29764bad0bc9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57280 - 39922 "HINFO IN 6598223870971274302.2706221343910350861. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01011612s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[281575694]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.217) (total time: 30003ms):
	Trace[281575694]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:48:54.221)
	Trace[281575694]: [30.003763494s] [30.003763494s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1147384648]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.218) (total time: 30003ms):
	Trace[1147384648]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:48:54.221)
	Trace[1147384648]: [30.003739495s] [30.003739495s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[953244717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.220) (total time: 30001ms):
	Trace[953244717]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:48:54.221)
	Trace[953244717]: [30.001122159s] [30.001122159s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d7843c76d3e0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52034 - 20734 "HINFO IN 58890247287997822.7011696019754483361. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.010598723s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[901481756]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.217) (total time: 30003ms):
	Trace[901481756]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (17:48:54.220)
	Trace[901481756]: [30.003857838s] [30.003857838s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1030491669]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.220) (total time: 30001ms):
	Trace[1030491669]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:48:54.221)
	Trace[1030491669]: [30.001096527s] [30.001096527s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1524033155]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 17:48:24.217) (total time: 30003ms):
	Trace[1524033155]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:48:54.220)
	Trace[1524033155]: [30.003971024s] [30.003971024s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0819 17:57:10.644833    2520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0819 17:57:10.647216    2520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0819 17:57:10.649389    2520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0819 17:57:10.651147    2520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0819 17:57:10.652603    2520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035070] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008031] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.688007] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006923] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.783873] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.221939] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.541350] systemd-fstab-generator[475]: Ignoring "noauto" option for root device
	[  +0.096759] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +1.290044] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.699086] systemd-fstab-generator[1113]: Ignoring "noauto" option for root device
	[  +0.249827] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +0.103861] systemd-fstab-generator[1164]: Ignoring "noauto" option for root device
	[  +0.112397] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +2.487596] systemd-fstab-generator[1392]: Ignoring "noauto" option for root device
	[  +0.102335] systemd-fstab-generator[1404]: Ignoring "noauto" option for root device
	[  +0.107587] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.124893] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[  +0.422338] systemd-fstab-generator[1592]: Ignoring "noauto" option for root device
	[  +6.527193] kauditd_printk_skb: 271 callbacks suppressed
	[ +21.609982] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [172f3186b280] <==
	{"level":"warn","ts":"2024-08-19T17:57:06.731383Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502722350342,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:57:07.232131Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502722350342,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-19T17:57:07.494307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:07.494349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:07.494363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:07.494380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 4, index: 9137] sent MsgPreVote request to c22c1f54a3cc7858 at term 4"}
	{"level":"warn","ts":"2024-08-19T17:57:07.733136Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502722350342,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:57:08.234403Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502722350342,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-19T17:57:08.495804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:08.495944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:08.495987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:08.496021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 4, index: 9137] sent MsgPreVote request to c22c1f54a3cc7858 at term 4"}
	{"level":"warn","ts":"2024-08-19T17:57:08.735257Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502722350342,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:57:09.235868Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502722350342,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-19T17:57:09.493882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:09.493980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:09.494002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:09.494096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 4, index: 9137] sent MsgPreVote request to c22c1f54a3cc7858 at term 4"}
	{"level":"warn","ts":"2024-08-19T17:57:09.737082Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502722350342,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T17:57:10.237845Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502722350342,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-19T17:57:10.494107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:10.494165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:10.494179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-08-19T17:57:10.494192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 4, index: 9137] sent MsgPreVote request to c22c1f54a3cc7858 at term 4"}
	{"level":"warn","ts":"2024-08-19T17:57:10.738231Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740502722350342,"retry-timeout":"500ms"}
	
	
	==> etcd [1bac9a6bc683] <==
	{"level":"warn","ts":"2024-08-19T17:56:01.759132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.117528491s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T17:56:01.759144Z","caller":"traceutil/trace.go:171","msg":"trace[1686780788] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; }","duration":"1.117542879s","start":"2024-08-19T17:56:00.641598Z","end":"2024-08-19T17:56:01.759141Z","steps":["trace[1686780788] 'agreement among raft nodes before linearized reading'  (duration: 1.117529152s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:56:01.759156Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:56:00.641591Z","time spent":"1.117561952s","remote":"127.0.0.1:50700","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
	2024/08/19 17:56:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T17:56:01.759251Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.077566389s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T17:56:01.759263Z","caller":"traceutil/trace.go:171","msg":"trace[1705065980] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; }","duration":"4.077581369s","start":"2024-08-19T17:55:57.681679Z","end":"2024-08-19T17:56:01.759260Z","steps":["trace[1705065980] 'agreement among raft nodes before linearized reading'  (duration: 4.07756658s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:56:01.759274Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:55:57.681640Z","time spent":"4.077630446s","remote":"127.0.0.1:50644","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	2024/08/19 17:56:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T17:56:01.760625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.182568047s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T17:56:01.760644Z","caller":"traceutil/trace.go:171","msg":"trace[671469639] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; }","duration":"4.182590035s","start":"2024-08-19T17:55:57.578049Z","end":"2024-08-19T17:56:01.760639Z","steps":["trace[671469639] 'agreement among raft nodes before linearized reading'  (duration: 4.182567412s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:56:01.760659Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T17:55:57.578012Z","time spent":"4.182641907s","remote":"127.0.0.1:50896","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true "}
	2024/08/19 17:56:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T17:56:01.801273Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:56:01.801299Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T17:56:01.805379Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-19T17:56:01.805977Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:56:01.805993Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:56:01.806008Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:56:01.806072Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:56:01.806120Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:56:01.806146Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:56:01.806154Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c22c1f54a3cc7858"}
	{"level":"info","ts":"2024-08-19T17:56:01.807780Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-19T17:56:01.807864Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-19T17:56:01.807873Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-431000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 17:57:11 up 1 min,  0 users,  load average: 0.65, 0.18, 0.06
	Linux ha-431000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [51e18fb0428a] <==
	I0819 17:55:14.914928       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:55:24.907734       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:55:24.907867       1 main.go:299] handling current node
	I0819 17:55:24.907909       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:55:24.907936       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:55:24.908309       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:55:24.908671       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:55:34.912415       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:55:34.912552       1 main.go:299] handling current node
	I0819 17:55:34.912573       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:55:34.912586       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:55:34.912670       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:55:34.912709       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:55:44.909624       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:55:44.909857       1 main.go:299] handling current node
	I0819 17:55:44.909966       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:55:44.910146       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:55:44.910471       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:55:44.910613       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	I0819 17:55:54.916675       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0819 17:55:54.916724       1 main.go:299] handling current node
	I0819 17:55:54.916738       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0819 17:55:54.916744       1 main.go:322] Node ha-431000-m02 has CIDR [10.244.1.0/24] 
	I0819 17:55:54.916877       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0819 17:55:54.916914       1 main.go:322] Node ha-431000-m04 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [3ff389834365] <==
	I0819 17:56:37.026389       1 options.go:228] external host was not specified, using 192.169.0.5
	I0819 17:56:37.032218       1 server.go:142] Version: v1.31.0
	I0819 17:56:37.032255       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:56:37.585979       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 17:56:37.599679       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:56:37.600931       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 17:56:37.601316       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 17:56:37.603149       1 instance.go:232] Using reconciler: lease
	W0819 17:56:57.582252       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 17:56:57.583566       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0819 17:56:57.603830       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [abdb28035668] <==
	I0819 17:57:09.954289       1 serving.go:386] Generated self-signed cert in-memory
	I0819 17:57:10.182362       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 17:57:10.182395       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:57:10.183584       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:57:10.183684       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 17:57:10.184023       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 17:57:10.184137       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [fa3467379487] <==
	I0819 17:56:37.100525       1 serving.go:386] Generated self-signed cert in-memory
	I0819 17:56:37.433483       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 17:56:37.433518       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:56:37.434851       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:56:37.435017       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 17:56:37.435457       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 17:56:37.435585       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 17:56:58.611146       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [5636b94096fe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:48:24.349165       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:48:24.367746       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0819 17:48:24.368041       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:48:24.405399       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:48:24.405456       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:48:24.405475       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:48:24.408447       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:48:24.408968       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:48:24.409000       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:48:24.413438       1 config.go:197] "Starting service config controller"
	I0819 17:48:24.414215       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:48:24.414469       1 config.go:326] "Starting node config controller"
	I0819 17:48:24.414498       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:48:24.415820       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:48:24.415879       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:48:24.514730       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:48:24.514769       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:48:24.516651       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [47db9cfc528d] <==
	E0819 17:57:05.656801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:05.698175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:05.698269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:05.783907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:05.784088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:07.050121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:07.050176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:07.211153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:07.211311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:07.369300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:07.369361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:07.493160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:07.493334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:07.560600       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:07.560695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:07.575247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:07.575373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:07.756058       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:07.756156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:08.070802       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:08.070972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:08.421841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:08.422115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:57:09.401794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0819 17:57:09.401972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [dea4f29e7860] <==
	I0819 17:47:41.723714       1 serving.go:386] Generated self-signed cert in-memory
	W0819 17:47:52.174871       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0819 17:47:52.174919       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 17:47:52.174925       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 17:48:01.357387       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 17:48:01.359330       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:48:01.366155       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 17:48:01.366276       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 17:48:01.366447       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 17:48:01.366799       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 17:48:01.470208       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 17:56:01.639033       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0819 17:56:01.640806       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0819 17:56:01.653709       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0819 17:56:01.659357       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 17:56:57 ha-431000 kubelet[1599]: E0819 17:56:57.348330    1599 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-431000&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:56:57 ha-431000 kubelet[1599]: I0819 17:56:57.607618    1599 kubelet_node_status.go:72] "Attempting to register node" node="ha-431000"
	Aug 19 17:56:58 ha-431000 kubelet[1599]: I0819 17:56:58.315193    1599 scope.go:117] "RemoveContainer" containerID="a003b845ec4885ffe11a6df4ca12789beeca0e7563536a4b31ac13cba1d8326b"
	Aug 19 17:56:58 ha-431000 kubelet[1599]: I0819 17:56:58.315941    1599 scope.go:117] "RemoveContainer" containerID="3ff38983436539d7eabb93160f708961fc1fba49a35da5b1be83efe18870dfaa"
	Aug 19 17:56:58 ha-431000 kubelet[1599]: E0819 17:56:58.316077    1599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-431000_kube-system(4be26ba36a583cb5cf787c7b12260cd6)\"" pod="kube-system/kube-apiserver-ha-431000" podUID="4be26ba36a583cb5cf787c7b12260cd6"
	Aug 19 17:56:59 ha-431000 kubelet[1599]: I0819 17:56:59.338650    1599 scope.go:117] "RemoveContainer" containerID="f4bd8ba2e043785b470b103903d1ff6efdef2fb4a92e539003b5148f7f8db01c"
	Aug 19 17:56:59 ha-431000 kubelet[1599]: I0819 17:56:59.339499    1599 scope.go:117] "RemoveContainer" containerID="fa34673794871b77218e4d894f632eb15e6bd6210bc5a2d23f20c36bd1f2270d"
	Aug 19 17:56:59 ha-431000 kubelet[1599]: E0819 17:56:59.339584    1599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-431000_kube-system(dd42dc03bd6907443c2b90ba845e30ca)\"" pod="kube-system/kube-controller-manager-ha-431000" podUID="dd42dc03bd6907443c2b90ba845e30ca"
	Aug 19 17:56:59 ha-431000 kubelet[1599]: E0819 17:56:59.811444    1599 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-431000\" not found"
	Aug 19 17:57:00 ha-431000 kubelet[1599]: W0819 17:57:00.420607    1599 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 19 17:57:00 ha-431000 kubelet[1599]: E0819 17:57:00.420817    1599 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 19 17:57:00 ha-431000 kubelet[1599]: E0819 17:57:00.420905    1599 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-431000"
	Aug 19 17:57:00 ha-431000 kubelet[1599]: E0819 17:57:00.420984    1599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-431000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 19 17:57:00 ha-431000 kubelet[1599]: I0819 17:57:00.643970    1599 scope.go:117] "RemoveContainer" containerID="3ff38983436539d7eabb93160f708961fc1fba49a35da5b1be83efe18870dfaa"
	Aug 19 17:57:00 ha-431000 kubelet[1599]: E0819 17:57:00.644225    1599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-431000_kube-system(4be26ba36a583cb5cf787c7b12260cd6)\"" pod="kube-system/kube-apiserver-ha-431000" podUID="4be26ba36a583cb5cf787c7b12260cd6"
	Aug 19 17:57:01 ha-431000 kubelet[1599]: I0819 17:57:01.843191    1599 scope.go:117] "RemoveContainer" containerID="3ff38983436539d7eabb93160f708961fc1fba49a35da5b1be83efe18870dfaa"
	Aug 19 17:57:01 ha-431000 kubelet[1599]: E0819 17:57:01.843710    1599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-431000_kube-system(4be26ba36a583cb5cf787c7b12260cd6)\"" pod="kube-system/kube-apiserver-ha-431000" podUID="4be26ba36a583cb5cf787c7b12260cd6"
	Aug 19 17:57:02 ha-431000 kubelet[1599]: I0819 17:57:02.190601    1599 scope.go:117] "RemoveContainer" containerID="fa34673794871b77218e4d894f632eb15e6bd6210bc5a2d23f20c36bd1f2270d"
	Aug 19 17:57:02 ha-431000 kubelet[1599]: E0819 17:57:02.190984    1599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-431000_kube-system(dd42dc03bd6907443c2b90ba845e30ca)\"" pod="kube-system/kube-controller-manager-ha-431000" podUID="dd42dc03bd6907443c2b90ba845e30ca"
	Aug 19 17:57:03 ha-431000 kubelet[1599]: E0819 17:57:03.493082    1599 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-431000.17ed32e497065e83  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-431000,UID:ha-431000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-431000,},FirstTimestamp:2024-08-19 17:56:29.720477315 +0000 UTC m=+0.122204621,LastTimestamp:2024-08-19 17:56:29.720477315 +0000 UTC m=+0.122204621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-431000,}"
	Aug 19 17:57:07 ha-431000 kubelet[1599]: I0819 17:57:07.422581    1599 kubelet_node_status.go:72] "Attempting to register node" node="ha-431000"
	Aug 19 17:57:08 ha-431000 kubelet[1599]: I0819 17:57:08.944053    1599 scope.go:117] "RemoveContainer" containerID="fa34673794871b77218e4d894f632eb15e6bd6210bc5a2d23f20c36bd1f2270d"
	Aug 19 17:57:09 ha-431000 kubelet[1599]: E0819 17:57:09.635928    1599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-431000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 19 17:57:09 ha-431000 kubelet[1599]: E0819 17:57:09.636053    1599 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-431000"
	Aug 19 17:57:09 ha-431000 kubelet[1599]: E0819 17:57:09.812508    1599 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-431000\" not found"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000: exit status 2 (154.338586ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-431000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (62.82s)

TestMountStart/serial/StartWithMountFirst (136.77s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-135000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-135000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.68763198s)

-- stdout --
	* [mount-start-1-135000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-135000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-135000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2:f5:a8:77:b:81
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-135000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:3f:80:8f:1:29
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:3f:80:8f:1:29
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-135000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-135000 -n mount-start-1-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-135000 -n mount-start-1-135000: exit status 7 (78.730557ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 11:03:29.814560    7582 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0819 11:03:29.814582    7582 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-135000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.77s)
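Both creation attempts above die the same way: the hyperkit driver never sees the new VM's MAC address appear in the host's DHCP lease file (on macOS, bootpd writes /var/db/dhcpd_leases), so no IP can be resolved. A hedged sketch of that lookup, assuming the usual bootpd entry layout; findLeaseIP is our own illustrative helper, not the driver's actual code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans the plain-text lease file that macOS bootpd writes
	// (normally /var/db/dhcpd_leases) for an entry whose hw_address matches
	// the given MAC, returning that entry's ip_address. Assumed layout:
	//   {
	//       name=minikube
	//       ip_address=192.168.64.2
	//       hw_address=1,2:f5:a8:77:b:81
	//       ...
	//   }
	func findLeaseIP(leases, mac string) (string, bool) {
		var ip string
		for _, line := range strings.Split(leases, "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			// bootpd prefixes the MAC with a hardware-type byte ("1,").
			if strings.HasPrefix(line, "hw_address=") &&
				strings.HasSuffix(line, ","+mac) {
				return ip, true
			}
		}
		return "", false
	}

	func main() {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ip, ok := findLeaseIP(string(data), "2:f5:a8:77:b:81"); ok {
			fmt.Println("lease found:", ip)
		} else {
			fmt.Println("no lease for that MAC - the state this test timed out in")
		}
	}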

TestScheduledStopUnix (142.08s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-036000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-036000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.767876345s)

-- stdout --
	* [scheduled-stop-036000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-036000" primary control-plane node in "scheduled-stop-036000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-036000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:29:7b:ff:54:54
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-036000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:cc:de:be:33:60
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:cc:de:be:33:60
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-036000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-036000" primary control-plane node in "scheduled-stop-036000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-036000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:29:7b:ff:54:54
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-036000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:cc:de:be:33:60
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:cc:de:be:33:60
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-19 11:18:25.203672 -0700 PDT m=+5208.942639787
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-036000 -n scheduled-stop-036000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-036000 -n scheduled-stop-036000: exit status 7 (78.073916ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0819 11:18:25.280036    8398 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0819 11:18:25.280059    8398 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-036000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-036000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-036000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-036000: (5.236380672s)
--- FAIL: TestScheduledStopUnix (142.08s)

TestKubernetesUpgrade (7201.726s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-579000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-579000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (51.065230602s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-579000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-579000: (2.364421262s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-579000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-579000 status --format={{.Host}}: exit status 7 (67.226539ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-579000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
E0819 11:35:43.440107    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:37:06.511721    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:40:03.983769    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:40:12.127008    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:40:29.031664    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:40:43.438023    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:41:27.062882    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:03.983685    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:29.033578    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:43.440106    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
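The repeating cert_rotation.go errors above come from client-go's certificate-rotation workers (the dynamicClientCert goroutines visible in the dump further below) still trying to reload client certificates that belong to profiles earlier tests already deleted (functional-622000, skaffold-458000, addons-080000). One way to spot such stale references, sketched with client-go's clientcmd loader; the check itself is ours, not part of the suite:

	package main

	import (
		"errors"
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig this run uses.
		cfg, err := clientcmd.LoadFromFile(
			"/Users/jenkins/minikube-integration/19478-1622/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Flag auth entries whose client certificate file no longer exists,
		// i.e. profiles that have since been deleted.
		for name, auth := range cfg.AuthInfos {
			if auth.ClientCertificate == "" {
				continue
			}
			if _, err := os.Stat(auth.ClientCertificate); errors.Is(err, os.ErrNotExist) {
				fmt.Printf("stale auth entry %q: %s\n", name, auth.ClientCertificate)
			}
		}
	}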
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-579000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (10m25.494674948s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-579000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-579000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-579000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (428.686437ms)

-- stdout --
	* [kubernetes-upgrade-579000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-579000
	    minikube start -p kubernetes-upgrade-579000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5790002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-579000 --kubernetes-version=v1.31.0
	    

** /stderr **
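The exit-status-106 refusal above is, at its core, a semantic-version comparison between the requested and the running cluster version. The shape of that guard, sketched with golang.org/x/mod/semver (minikube's real check lives in its own code; the version values are the ones from this run):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func main() {
		existing, requested := "v1.31.0", "v1.20.0"
		// semver.Compare is negative when requested < existing,
		// i.e. the start would downgrade a live cluster.
		if semver.Compare(requested, existing) < 0 {
			fmt.Printf("unable to safely downgrade existing Kubernetes %s cluster to %s\n",
				existing, requested)
		}
	}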
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-579000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
E0819 11:50:03.981611    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/skaffold-458000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:50:29.030754    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:50:43.438713    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (23m49s)
	TestKubernetesUpgrade (16m54s)
	TestNetworkPlugins (31m16s)
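Everything below this point is the goroutine dump that go test prints when its -timeout alarm (2h here) fires in testing.(*M).startAlarm. A test can bound its own work against that deadline instead of letting the whole binary panic; a minimal sketch using the real testing.T.Deadline API, with a purely illustrative one-minute headroom:

	package integration_test

	import (
		"context"
		"testing"
		"time"
	)

	func TestLongOperation(t *testing.T) {
		ctx := context.Background()
		if dl, ok := t.Deadline(); ok {
			var cancel context.CancelFunc
			// Leave headroom so cleanup runs before the -timeout alarm
			// panics the whole binary (one minute is illustrative).
			ctx, cancel = context.WithDeadline(ctx, dl.Add(-time.Minute))
			defer cancel()
		}
		_ = ctx // run the bounded work with ctx here
	}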

goroutine 2563 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006e09c0, 0xc000b91bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000984348, {0x9fe40e0, 0x2a, 0x2a}, {0x561e6c5?, 0x7343c6e?, 0xa007780?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0007e6fa0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0007e6fa0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00069f400)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 143 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000896500, 0xc000058ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 141
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 2560 [select, 16 minutes]:
os/exec.(*Cmd).watchCtx(0xc000222a80, 0xc001fb6000)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 657
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 160 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x8a7b690, 0xc000058ba0}, 0xc000507750, 0xc000b82f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x8a7b690, 0xc000058ba0}, 0x0?, 0xc000507750, 0xc000507798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x8a7b690?, 0xc000058ba0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5b98b25?, 0xc000d8cf00?, 0x8a71d40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 143
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 657 [syscall, 16 minutes]:
syscall.syscall6(0xc0014a5f80?, 0x1000000000010?, 0x10100000019?, 0x517e2728?, 0x90?, 0xaaa0108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000c06a40?, 0x555f0c5?, 0x90?, 0x89b09c0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x568f885?, 0xc000c06a74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000d68120)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000222a80)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000222a80)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0006e0b60, 0xc000222a80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0006e0b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:131 +0x576
testing.tRunner(0xc0006e0b60, 0x8a46fb0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 85 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 24
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 1208 [chan send, 85 minutes]:
os/exec.(*Cmd).watchCtx(0xc000bcf380, 0xc000981800)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1190
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2206 [syscall, 6 minutes]:
syscall.syscall6(0xc00152ff80?, 0x1000000000010?, 0x10000000019?, 0xaaa9a68?, 0x90?, 0xaaa0108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc00006f830?, 0x555f0c5?, 0x90?, 0x89b09c0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x568f885?, 0xc00006f864, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000d68a50)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000223680)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000223680)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014d49c0, 0xc000223680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0014d49c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:275 +0x1445
testing.tRunner(0xc0014d49c0, 0x8a47060)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
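Goroutines 657 and 2206 above are the two live tests, each parked in syscall.Wait4 waiting on a child minikube process that never exits. The helper runs children via os/exec; a context-bound variant is the usual way to cap such a wait. A sketch, with the command line taken from this log and a purely illustrative 15-minute cap:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Cap the child at 15 minutes (illustrative) so a hung "minikube
		// start" fails this step instead of riding out the 2h suite alarm.
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "start",
			"-p", "kubernetes-upgrade-579000", "--memory=2200",
			"--kubernetes-version=v1.31.0", "--driver=hyperkit")
		out, err := cmd.CombinedOutput() // the Wait4 returns once ctx kills the child
		fmt.Printf("err=%v\n%s", err, out)
	}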

goroutine 159 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008964d0, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000712d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x8a956e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000896500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0000bc030, {0x8a55320, 0xc0006b71a0}, 0x1, 0xc000058ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000bc030, 0x3b9aca00, 0x0, 0x1, 0xc000058ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 143
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 161 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 160
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 939 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000896c50, 0x25)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00152ad80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x8a956e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000896c80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006cfbc0, {0x8a55320, 0xc00085b650}, 0x1, 0xc000058ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006cfbc0, 0x3b9aca00, 0x0, 0x1, 0xc000058ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 948
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 142 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x8a71d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 141
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2212 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001f41ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001f41ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001f41ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001f41ba0, 0xc00090b480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1742 +0x390
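This goroutine, and the many like it below blocked in testing.(*testContext).waitParallel, are subtests that called t.Parallel() and have been queued (32 minutes here) for a slot under go test's -parallel limit; the slots never free because the two running tests hang. The queuing point, sketched:

	package integration_test

	import "testing"

	func TestQueuedSubtest(t *testing.T) {
		// t.Parallel() signals the test runs in parallel and pauses it; the
		// goroutine blocks in testing.(*testContext).waitParallel until
		// fewer than -parallel N tests are running. That is exactly where
		// the TestNetworkPlugins subtests in this dump are stuck.
		t.Parallel()
		// body runs only once a slot frees up
	}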

goroutine 2121 [chan receive, 32 minutes]:
testing.(*T).Run(0xc001f40d00, {0x72e95fd?, 0x5940ee43160?}, 0xc000890d98)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001f40d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc001f40d00, 0x8a47098)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1221 [chan send, 85 minutes]:
os/exec.(*Cmd).watchCtx(0xc000bcfb00, 0xc00190a000)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 875
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 722 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x519f03a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00090a400?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00090a400)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00090a400)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000cfe440)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000cfe440)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0002505a0, {0x8a6e160, 0xc000cfe440})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0002505a0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc001f40680?, 0xc001f40680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 719
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129
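Goroutine 722 itself is benign: it is the suite's helper HTTP proxy (startHTTPProxy) parked in Accept for the rest of the run. The pattern, sketched with a listener on an ephemeral port (address and handler are illustrative, not the helper's actual code):

	package main

	import (
		"fmt"
		"net"
		"net/http"
	)

	func main() {
		// Port 0 lets the kernel pick a free port, as a test helper would.
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		fmt.Println("helper listening on", ln.Addr())
		srv := &http.Server{Handler: http.HandlerFunc(
			func(w http.ResponseWriter, r *http.Request) {
				// proxy/forwarding logic would go here
			})}
		go srv.Serve(ln) // stays blocked in Accept, matching the dump
		select {}        // keep main alive for the demo
	}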

goroutine 964 [chan send, 87 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014def00, 0xc000981380)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 963
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 947 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x8a71d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 884
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2210 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001f41860, 0xc000890d98)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2121
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2169 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x8a7b690, 0xc000058ba0}, 0xc001ce4f50, 0xc000b89f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x8a7b690, 0xc000058ba0}, 0x78?, 0xc001ce4f50, 0xc001ce4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x8a7b690?, 0xc000058ba0?}, 0xc001f411e0?, 0x5692540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001ce4fd0?, 0x56d8844?, 0x8a470c8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2185
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 2205 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0014d4820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014d4820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0014d4820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0014d4820, 0x8a470e8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 941 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 940
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1133 [chan send, 85 minutes]:
os/exec.(*Cmd).watchCtx(0xc00153ca80, 0xc001a7ff80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1132
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 940 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x8a7b690, 0xc000058ba0}, 0xc000509750, 0xc00070ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x8a7b690, 0xc000058ba0}, 0xa0?, 0xc000509750, 0xc000509798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x8a7b690?, 0xc000058ba0?}, 0x10000c0006e0b60?, 0x5692540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005097d0?, 0x56d8844?, 0xc0009805a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 948
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 1325 [select, 85 minutes]:
net/http.(*persistConn).writeLoop(0xc0014925a0)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1345
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 948 [chan receive, 87 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000896c80, 0xc000058ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 884
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 1324 [select, 85 minutes]:
net/http.(*persistConn).readLoop(0xc0014925a0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1345
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2122 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001f41380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001f41380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc001f41380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc001f41380, 0x8a470a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2559 [IO wait, 6 minutes]:
internal/poll.runtime_pollWait(0x519effc8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d8c180?, 0xc0008e7c80?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d8c180, {0xc0008e7c80, 0x380, 0x380})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001d32048, {0xc0008e7c80?, 0xc0014c1880?, 0x80?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014a4060, {0x8a53ce8, 0xc001ce6010})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x8a53e28, 0xc0014a4060}, {0x8a53ce8, 0xc001ce6010}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000582678?, {0x8a53e28, 0xc0014a4060})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000582738?, {0x8a53e28?, 0xc0014a4060?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x8a53e28, 0xc0014a4060}, {0x8a53da8, 0xc001d32048}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001fb6000?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 657
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2168 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001910290, 0x17)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000b85d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x8a956e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019102c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e74000, {0x8a55320, 0xc00003c240}, 0x1, 0xc000058ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e74000, 0x3b9aca00, 0x0, 0x1, 0xc000058ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2185
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 2123 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001f41520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001f41520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc001f41520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc001f41520, 0x8a470b0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2202 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0014d4340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014d4340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0014d4340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0014d4340, 0x8a470e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2170 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2169
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2184 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x8a71d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2155
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2213 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001f41d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001f41d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001f41d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001f41d40, 0xc00090b500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2185 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019102c0, 0xc000058ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2155
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 2211 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001f41a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001f41a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001f41a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001f41a00, 0xc00090b400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2214 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006e0000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006e0000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006e0000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0006e0000, 0xc00090b580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2215 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006e1380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006e1380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006e1380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0006e1380, 0xc00090b600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2216 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006e1ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006e1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006e1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0006e1ba0, 0xc00090b680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2217 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006e1d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006e1d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006e1d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0006e1d40, 0xc00090b700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2218 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000922b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000922b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000922b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000922b60, 0xc00090b780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2219 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0007b3cc0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000922ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000922ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000922ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000922ea0, 0xc00090b800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2558 [IO wait, 6 minutes]:
internal/poll.runtime_pollWait(0x519f0690, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d8c0c0?, 0xc00074c27b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d8c0c0, {0xc00074c27b, 0x585, 0x585})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001d32030, {0xc00074c27b?, 0xc0014c1500?, 0x23b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014a4030, {0x8a53ce8, 0xc001ce6008})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x8a53e28, 0xc0014a4030}, {0x8a53ce8, 0xc001ce6008}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001ce0e78?, {0x8a53e28, 0xc0014a4030})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001ce0f38?, {0x8a53e28?, 0xc0014a4030?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x8a53e28, 0xc0014a4030}, {0x8a53da8, 0xc001d32030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001ecae40?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 657
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2594 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc000223680, 0xc001eca2a0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2206
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2593 [IO wait, 6 minutes]:
internal/poll.runtime_pollWait(0x519f0880, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00077cd80?, 0xc000856a20?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00077cd80, {0xc000856a20, 0x15e0, 0x15e0})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001ce6170, {0xc000856a20?, 0x9?, 0x2000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00152e630, {0x8a53ce8, 0xc001d32188})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x8a53e28, 0xc00152e630}, {0x8a53ce8, 0xc001d32188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x8a53e28, 0xc00152e630})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x5556a3e?, {0x8a53e28?, 0xc00152e630?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x8a53e28, 0xc00152e630}, {0x8a53da8, 0xc001ce6170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001d08780?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2206
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2592 [IO wait, 6 minutes]:
internal/poll.runtime_pollWait(0x519f00c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00077cb40?, 0xc00074da04?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00077cb40, {0xc00074da04, 0x5fc, 0x5fc})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001ce6158, {0xc00074da04?, 0xc0006848c0?, 0x204?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00152e600, {0x8a53ce8, 0xc001d32178})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x8a53e28, 0xc00152e600}, {0x8a53ce8, 0xc001d32178}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000094e78?, {0x8a53e28, 0xc00152e600})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000094f38?, {0x8a53e28?, 0xc00152e600?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x8a53e28, 0xc00152e600}, {0x8a53da8, 0xc001ce6158}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000059b00?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2206
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

Test pass (157/201)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.96
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.0/json-events 10.11
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.29
18 TestDownloadOnly/v1.31.0/DeleteAll 0.24
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
27 TestAddons/Setup 204.62
29 TestAddons/serial/Volcano 39.38
31 TestAddons/serial/GCPAuth/Namespaces 0.1
33 TestAddons/parallel/Registry 14.87
34 TestAddons/parallel/Ingress 20.21
35 TestAddons/parallel/InspektorGadget 10.51
36 TestAddons/parallel/MetricsServer 5.5
37 TestAddons/parallel/HelmTiller 10.1
39 TestAddons/parallel/CSI 49.07
40 TestAddons/parallel/Headlamp 19.36
41 TestAddons/parallel/CloudSpanner 5.39
42 TestAddons/parallel/LocalPath 53.28
43 TestAddons/parallel/NvidiaDevicePlugin 5.38
44 TestAddons/parallel/Yakd 10.47
45 TestAddons/StoppedEnableDisable 5.93
53 TestHyperKitDriverInstallOrUpdate 8.9
56 TestErrorSpam/setup 35.24
57 TestErrorSpam/start 1.7
58 TestErrorSpam/status 0.5
59 TestErrorSpam/pause 1.38
60 TestErrorSpam/unpause 1.35
61 TestErrorSpam/stop 155.83
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 76.09
66 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 360.61
73 TestFunctional/serial/CacheCmd/cache/add_local 60.32
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.08
78 TestFunctional/serial/CacheCmd/cache/delete 0.17
81 TestFunctional/serial/ExtraConfig 87.41
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 2.95
84 TestFunctional/serial/LogsFileCmd 3.08
85 TestFunctional/serial/InvalidService 4
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 12.67
89 TestFunctional/parallel/DryRun 1.48
90 TestFunctional/parallel/InternationalLanguage 0.61
91 TestFunctional/parallel/StatusCmd 0.59
95 TestFunctional/parallel/ServiceCmdConnect 16.52
96 TestFunctional/parallel/AddonsCmd 0.22
97 TestFunctional/parallel/PersistentVolumeClaim 29.63
99 TestFunctional/parallel/SSHCmd 0.3
100 TestFunctional/parallel/CpCmd 1.06
101 TestFunctional/parallel/MySQL 29.02
102 TestFunctional/parallel/FileSync 0.19
103 TestFunctional/parallel/CertSync 1.07
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.15
111 TestFunctional/parallel/License 0.48
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.27
125 TestFunctional/parallel/ProfileCmd/profile_list 0.27
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
127 TestFunctional/parallel/MountCmd/any-port 7.32
128 TestFunctional/parallel/ServiceCmd/List 0.4
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
131 TestFunctional/parallel/ServiceCmd/Format 0.36
132 TestFunctional/parallel/ServiceCmd/URL 0.31
133 TestFunctional/parallel/MountCmd/specific-port 1.82
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.4
135 TestFunctional/parallel/Version/short 0.12
136 TestFunctional/parallel/Version/components 0.64
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
141 TestFunctional/parallel/ImageCommands/ImageBuild 4.14
142 TestFunctional/parallel/ImageCommands/Setup 1.95
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.1
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.68
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
150 TestFunctional/parallel/DockerEnv/bash 0.63
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/NodeLabels 0.05
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.32
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.43
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.31
178 TestImageBuild/serial/Setup 40.53
179 TestImageBuild/serial/NormalBuild 1.59
180 TestImageBuild/serial/BuildWithBuildArg 0.75
181 TestImageBuild/serial/BuildWithDockerIgnore 0.56
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
186 TestJSONOutput/start/Command 77.06
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 0.49
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 0.46
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 8.33
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.58
214 TestMainNoArgs 0.08
215 TestMinikubeProfile 89.37
221 TestMultiNode/serial/FreshStart2Nodes 109.09
222 TestMultiNode/serial/DeployApp2Nodes 4.48
223 TestMultiNode/serial/PingHostFrom2Pods 0.87
224 TestMultiNode/serial/AddNode 45.52
225 TestMultiNode/serial/MultiNodeLabels 0.05
226 TestMultiNode/serial/ProfileList 0.18
227 TestMultiNode/serial/CopyFile 5.25
228 TestMultiNode/serial/StopNode 2.84
229 TestMultiNode/serial/StartAfterStop 41.44
230 TestMultiNode/serial/RestartKeepsNodes 163.95
231 TestMultiNode/serial/DeleteNode 3.27
232 TestMultiNode/serial/StopMultiNode 16.8
233 TestMultiNode/serial/RestartMultiNode 99.27
234 TestMultiNode/serial/ValidateNameConflict 45
238 TestPreload 204.83
241 TestSkaffold 109.76
244 TestRunningBinaryUpgrade 98.91
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.06
260 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.67
TestDownloadOnly/v1.20.0/json-events (14.96s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-384000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-384000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (14.961965527s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.96s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-384000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-384000: exit status 85 (290.651568ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-384000 | jenkins | v1.33.1 | 19 Aug 24 09:51 PDT |          |
	|         | -p download-only-384000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 09:51:36
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 09:51:36.092354    2178 out.go:345] Setting OutFile to fd 1 ...
	I0819 09:51:36.092644    2178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 09:51:36.092649    2178 out.go:358] Setting ErrFile to fd 2...
	I0819 09:51:36.092653    2178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 09:51:36.092828    2178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	W0819 09:51:36.092931    2178 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19478-1622/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19478-1622/.minikube/config/config.json: no such file or directory
	I0819 09:51:36.094828    2178 out.go:352] Setting JSON to true
	I0819 09:51:36.118189    2178 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1266,"bootTime":1724085030,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 09:51:36.118282    2178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 09:51:36.139991    2178 out.go:97] [download-only-384000] minikube v1.33.1 on Darwin 14.6.1
	I0819 09:51:36.140104    2178 notify.go:220] Checking for updates...
	W0819 09:51:36.140117    2178 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 09:51:36.160569    2178 out.go:169] MINIKUBE_LOCATION=19478
	I0819 09:51:36.181711    2178 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 09:51:36.202795    2178 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 09:51:36.223713    2178 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 09:51:36.244770    2178 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	W0819 09:51:36.286559    2178 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 09:51:36.286835    2178 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 09:51:36.334711    2178 out.go:97] Using the hyperkit driver based on user configuration
	I0819 09:51:36.334747    2178 start.go:297] selected driver: hyperkit
	I0819 09:51:36.334755    2178 start.go:901] validating driver "hyperkit" against <nil>
	I0819 09:51:36.334869    2178 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 09:51:36.335089    2178 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 09:51:36.730887    2178 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 09:51:36.736110    2178 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 09:51:36.736131    2178 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 09:51:36.736160    2178 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 09:51:36.740944    2178 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0819 09:51:36.741386    2178 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 09:51:36.741418    2178 cni.go:84] Creating CNI manager for ""
	I0819 09:51:36.741431    2178 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 09:51:36.741512    2178 start.go:340] cluster config:
	{Name:download-only-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 09:51:36.741757    2178 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 09:51:36.763146    2178 out.go:97] Downloading VM boot image ...
	I0819 09:51:36.763233    2178 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 09:51:42.458065    2178 out.go:97] Starting "download-only-384000" primary control-plane node in "download-only-384000" cluster
	I0819 09:51:42.458121    2178 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 09:51:42.518993    2178 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0819 09:51:42.519047    2178 cache.go:56] Caching tarball of preloaded images
	I0819 09:51:42.519429    2178 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 09:51:42.541125    2178 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 09:51:42.541153    2178 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0819 09:51:42.632507    2178 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-384000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-384000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
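
The assertion pattern here: the test runs `minikube logs` against a profile whose host was never created, expects failure, and checks for the specific code (85) rather than treating any non-zero exit as a pass. A hypothetical sketch of extracting that code in Go (not the helper the suite actually uses):

// Sketch: pull the exit status out of an error returned by (*Cmd).Run/Wait.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func exitCode(err error) int {
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode() // e.g. 85 for the `logs` invocation above
	}
	return -1 // command did not run, or was killed by a signal
}

func main() {
	err := exec.Command("sh", "-c", "exit 85").Run()
	fmt.Println(exitCode(err)) // prints 85
}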

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-384000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0/json-events (10.11s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-388000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-388000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit : (10.109521258s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (10.11s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-388000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-388000: exit status 85 (294.340119ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-384000 | jenkins | v1.33.1 | 19 Aug 24 09:51 PDT |                     |
	|         | -p download-only-384000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 09:51 PDT | 19 Aug 24 09:51 PDT |
	| delete  | -p download-only-384000        | download-only-384000 | jenkins | v1.33.1 | 19 Aug 24 09:51 PDT | 19 Aug 24 09:51 PDT |
	| start   | -o=json --download-only        | download-only-388000 | jenkins | v1.33.1 | 19 Aug 24 09:51 PDT |                     |
	|         | -p download-only-388000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 09:51:51
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 09:51:51.792774    2202 out.go:345] Setting OutFile to fd 1 ...
	I0819 09:51:51.793028    2202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 09:51:51.793033    2202 out.go:358] Setting ErrFile to fd 2...
	I0819 09:51:51.793037    2202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 09:51:51.793198    2202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 09:51:51.794601    2202 out.go:352] Setting JSON to true
	I0819 09:51:51.817060    2202 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1281,"bootTime":1724085030,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 09:51:51.817145    2202 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 09:51:51.839114    2202 out.go:97] [download-only-388000] minikube v1.33.1 on Darwin 14.6.1
	I0819 09:51:51.839318    2202 notify.go:220] Checking for updates...
	I0819 09:51:51.860508    2202 out.go:169] MINIKUBE_LOCATION=19478
	I0819 09:51:51.881632    2202 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 09:51:51.902630    2202 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 09:51:51.923383    2202 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 09:51:51.944827    2202 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	W0819 09:51:51.986343    2202 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 09:51:51.986810    2202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 09:51:52.016578    2202 out.go:97] Using the hyperkit driver based on user configuration
	I0819 09:51:52.016636    2202 start.go:297] selected driver: hyperkit
	I0819 09:51:52.016649    2202 start.go:901] validating driver "hyperkit" against <nil>
	I0819 09:51:52.016849    2202 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 09:51:52.017111    2202 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19478-1622/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0819 09:51:52.026563    2202 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0819 09:51:52.030391    2202 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 09:51:52.030416    2202 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0819 09:51:52.030447    2202 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 09:51:52.033079    2202 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0819 09:51:52.033225    2202 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 09:51:52.033256    2202 cni.go:84] Creating CNI manager for ""
	I0819 09:51:52.033273    2202 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 09:51:52.033283    2202 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 09:51:52.033348    2202 start.go:340] cluster config:
	{Name:download-only-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-388000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 09:51:52.033436    2202 iso.go:125] acquiring lock: {Name:mk76e9a270f5290b5369d70b18bd536ac6e95824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 09:51:52.054659    2202 out.go:97] Starting "download-only-388000" primary control-plane node in "download-only-388000" cluster
	I0819 09:51:52.054693    2202 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 09:51:52.115556    2202 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 09:51:52.115590    2202 cache.go:56] Caching tarball of preloaded images
	I0819 09:51:52.116033    2202 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 09:51:52.137498    2202 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 09:51:52.137524    2202 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0819 09:51:52.228016    2202 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0819 09:51:58.435337    2202 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0819 09:51:58.435540    2202 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0819 09:51:58.899362    2202 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 09:51:58.899595    2202 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/download-only-388000/config.json ...
	I0819 09:51:58.899620    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/download-only-388000/config.json: {Name:mk0f83f4d0ddf71f4e7d70699cc03645982e41c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 09:51:58.899954    2202 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 09:51:58.900223    2202 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19478-1622/.minikube/cache/darwin/amd64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-388000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-388000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.29s)
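
The v1.31.0 log above shows the full preload flow: fetch the tarball with an md5 checksum pinned in the URL, save the checksum, then verify the file before caching it (the preload.go:247/254 lines). A stand-alone sketch of the verify step, assuming a plain md5-of-file comparison is all that's required (the real download.go handles this internally via its download machinery):

// Sketch: verify a downloaded preload tarball against the md5 from the URL.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Checksum value taken from the download URL in the log above.
	if err := verifyMD5("preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4",
		"2dd98f97b896d7a4f012ee403b477cc8"); err != nil {
		fmt.Println(err)
	}
}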

TestDownloadOnly/v1.31.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.24s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-388000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.96s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-006000 --alsologtostderr --binary-mirror http://127.0.0.1:49640 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-006000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-006000
--- PASS: TestBinaryMirror (0.96s)
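
TestBinaryMirror points `minikube start --download-only` at a local HTTP endpoint (127.0.0.1:49640) that stands in for the upstream host when fetching kubectl and friends. A stand-in mirror can be as small as a static file server; the release-style directory layout in the comment is an assumption for illustration, not taken from the test:

// Hypothetical stand-in for a --binary-mirror endpoint: a static file server
// whose tree mimics the release layout, e.g.
//   mirror-root/v1.31.0/bin/darwin/amd64/kubectl
package main

import (
	"log"
	"net/http"
)

func main() {
	log.Fatal(http.ListenAndServe("127.0.0.1:49640", http.FileServer(http.Dir("mirror-root"))))
}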

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-080000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-080000: exit status 85 (187.970391ms)

-- stdout --
	* Profile "addons-080000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-080000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-080000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-080000: exit status 85 (208.229555ms)

-- stdout --
	* Profile "addons-080000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-080000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (204.62s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-080000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-080000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m24.62456995s)
--- PASS: TestAddons/Setup (204.62s)

TestAddons/serial/Volcano (39.38s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 10.992419ms
addons_test.go:897: volcano-scheduler stabilized in 11.211099ms
addons_test.go:905: volcano-admission stabilized in 11.255655ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-wbpms" [0063d4ef-25ce-48f6-bd31-711f0d08e6c8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003484038s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-lnzfk" [fb3aa6b2-8b0b-4f3f-bf85-97351a8b981e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005158084s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-r5l94" [987674a1-cde1-4e91-85ab-9c4a2a12c681] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00511377s
addons_test.go:932: (dbg) Run:  kubectl --context addons-080000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-080000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-080000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d9e6779d-a7b3-4347-a2fe-84a05fe8ce90] Pending
helpers_test.go:344: "test-job-nginx-0" [d9e6779d-a7b3-4347-a2fe-84a05fe8ce90] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d9e6779d-a7b3-4347-a2fe-84a05fe8ce90] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004347672s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-080000 addons disable volcano --alsologtostderr -v=1: (10.082852759s)
--- PASS: TestAddons/serial/Volcano (39.38s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-080000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-080000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Registry (14.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.388724ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-cwxvw" [9a40b6ed-ea64-481b-b4a8-8e769244871b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00474305s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-277vp" [61645697-58e8-4e05-8cf1-54f8cff36c0b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006101099s
addons_test.go:342: (dbg) Run:  kubectl --context addons-080000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-080000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-080000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.232215749s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 ip
2024/08/19 09:56:39 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.87s)
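
The reachability check in this test is `wget --spider -S` run from a throwaway busybox pod, because registry.kube-system.svc.cluster.local only resolves inside the cluster. The same probe expressed in Go, as it would run from within a pod (a sketch, not the test's code):

// Sketch: HTTP HEAD against the registry Service DNS name, the in-cluster
// equivalent of `wget --spider -S http://registry.kube-system.svc.cluster.local`.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		panic(err) // DNS failure outside the cluster, or registry unreachable
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // expect 200 OK when registry and kube-dns are healthy
}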

TestAddons/parallel/Ingress (20.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-080000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-080000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-080000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [45368962-9b5b-42d7-8c8b-12b2ea770dfb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [45368962-9b5b-42d7-8c8b-12b2ea770dfb] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.002234127s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-080000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-080000 addons disable ingress --alsologtostderr -v=1: (7.457867674s)
--- PASS: TestAddons/parallel/Ingress (20.21s)
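
The curl at addons_test.go:264 relies on virtual-host routing: the request goes to the node itself (127.0.0.1, run over ssh inside the VM) while the Host header nginx.example.com selects the ingress rule. In Go the equivalent is setting req.Host; the sketch below targets the VM IP the log prints (192.169.0.2), which is an assumption about reachability from the host:

// Sketch of the Host-header trick the Ingress test performs with curl.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.169.0.2/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // routing key for the nginx ingress rule
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body))
}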

TestAddons/parallel/InspektorGadget (10.51s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hjx4j" [6c9d1885-7f00-469c-ac3b-5ce910f1e5a4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00567719s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-080000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-080000: (5.501952638s)
--- PASS: TestAddons/parallel/InspektorGadget (10.51s)

TestAddons/parallel/MetricsServer (5.5s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.589711ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-jvkh2" [ce761b79-400f-47ed-b048-b5da1baca5b3] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004525962s
addons_test.go:417: (dbg) Run:  kubectl --context addons-080000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.50s)

TestAddons/parallel/HelmTiller (10.1s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.170725ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-ngtk5" [651bf32c-7763-4e0c-acd4-73aa9d497f94] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004740925s
addons_test.go:475: (dbg) Run:  kubectl --context addons-080000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-080000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.685184732s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.10s)

TestAddons/parallel/CSI (49.07s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.438144ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-080000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-080000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [77f2f1a7-6eaa-4e32-a6ea-907f7570faaf] Pending
helpers_test.go:344: "task-pv-pod" [77f2f1a7-6eaa-4e32-a6ea-907f7570faaf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [77f2f1a7-6eaa-4e32-a6ea-907f7570faaf] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004355709s
addons_test.go:590: (dbg) Run:  kubectl --context addons-080000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-080000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-080000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-080000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-080000 delete pod task-pv-pod: (1.197117334s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-080000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-080000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-080000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e68f1748-3956-4977-88ed-4253fe4922e2] Pending
helpers_test.go:344: "task-pv-pod-restore" [e68f1748-3956-4977-88ed-4253fe4922e2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e68f1748-3956-4977-88ed-4253fe4922e2] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.021470917s
addons_test.go:632: (dbg) Run:  kubectl --context addons-080000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-080000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-080000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-080000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.421942745s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.07s)
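For reference, the snapshot/restore round trip this test drives can be reproduced by hand roughly as follows. This is a minimal sketch: the until-loops stand in for the test's polling helpers, and it assumes the csi-hostpath-driver and volumesnapshots addons are already enabled (the testdata manifests themselves are not reproduced here).

    # Snapshot the bound claim, wait for it to become usable, then restore it
    # into a new PVC and wait for that claim to bind.
    kubectl --context addons-080000 create -f testdata/csi-hostpath-driver/snapshot.yaml
    until [ "$(kubectl --context addons-080000 get volumesnapshot new-snapshot-demo \
        -n default -o 'jsonpath={.status.readyToUse}')" = "true" ]; do sleep 2; done
    kubectl --context addons-080000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    until [ "$(kubectl --context addons-080000 get pvc hpvc-restore \
        -n default -o 'jsonpath={.status.phase}')" = "Bound" ]; do sleep 2; done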

                                                
                                    
TestAddons/parallel/Headlamp (19.36s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-080000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-6vn8n" [a6c1370f-5761-4291-8d91-8966ff1dc49e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-6vn8n" [a6c1370f-5761-4291-8d91-8966ff1dc49e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005922735s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-080000 addons disable headlamp --alsologtostderr -v=1: (5.463816801s)
--- PASS: TestAddons/parallel/Headlamp (19.36s)
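The same enable/wait/disable pattern can be reproduced directly; `kubectl wait` below is a stand-in for the test's label-based polling, not the test's own code:

    out/minikube-darwin-amd64 addons enable headlamp -p addons-080000
    kubectl --context addons-080000 -n headlamp wait pod \
        -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m
    out/minikube-darwin-amd64 -p addons-080000 addons disable headlamp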

                                                
                                    
TestAddons/parallel/CloudSpanner (5.39s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-hdppj" [19b5be8e-5465-4010-8f67-134b8256e7bb] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00412396s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-080000
--- PASS: TestAddons/parallel/CloudSpanner (5.39s)

                                                
                                    
TestAddons/parallel/LocalPath (53.28s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-080000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-080000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-080000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b93328e7-5d9b-4ebd-8587-66c5d4e3ffd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b93328e7-5d9b-4ebd-8587-66c5d4e3ffd6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b93328e7-5d9b-4ebd-8587-66c5d4e3ffd6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.006169906s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-080000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 ssh "cat /opt/local-path-provisioner/pvc-41b6aee4-3205-46be-9630-deb926d6771d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-080000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-080000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-080000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.642062131s)
--- PASS: TestAddons/parallel/LocalPath (53.28s)
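To check by hand that the local-path provisioner really wrote the pod's data onto the VM's disk, the test reads the backing file over SSH. The provisioned directory embeds the PVC's UID, so the exact path below is the one from this particular run:

    kubectl --context addons-080000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-080000 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the test-local-path pod has completed:
    out/minikube-darwin-amd64 -p addons-080000 ssh \
        "cat /opt/local-path-provisioner/pvc-41b6aee4-3205-46be-9630-deb926d6771d_default_test-pvc/file1"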

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.38s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ck6lt" [554c5553-ef01-4845-aa37-02d02d4b16d7] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005118676s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-080000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.38s)

                                                
                                    
TestAddons/parallel/Yakd (10.47s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-kszt4" [edff94ae-b514-47d2-a28d-da78cf0d4bf0] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004186542s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-080000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-080000 addons disable yakd --alsologtostderr -v=1: (5.46905068s)
--- PASS: TestAddons/parallel/Yakd (10.47s)

                                                
                                    
TestAddons/StoppedEnableDisable (5.93s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-080000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-080000: (5.384318331s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-080000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-080000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-080000
--- PASS: TestAddons/StoppedEnableDisable (5.93s)
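The property under test is that addon toggling succeeds even while the cluster is stopped, since it only edits the profile's stored config; reproduced by hand:

    out/minikube-darwin-amd64 stop -p addons-080000
    out/minikube-darwin-amd64 addons enable dashboard -p addons-080000    # no running VM needed
    out/minikube-darwin-amd64 addons disable dashboard -p addons-080000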

                                                
                                    
TestHyperKitDriverInstallOrUpdate (8.9s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
E0819 11:20:43.476033    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestHyperKitDriverInstallOrUpdate (8.90s)

                                                
                                    
TestErrorSpam/setup (35.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-492000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-492000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 --driver=hyperkit : (35.243825171s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (35.24s)

                                                
                                    
TestErrorSpam/start (1.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 start --dry-run
--- PASS: TestErrorSpam/start (1.70s)

                                                
                                    
TestErrorSpam/status (0.5s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 status
--- PASS: TestErrorSpam/status (0.50s)

                                                
                                    
TestErrorSpam/pause (1.38s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 pause
--- PASS: TestErrorSpam/pause (1.38s)

                                                
                                    
TestErrorSpam/unpause (1.35s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 unpause
--- PASS: TestErrorSpam/unpause (1.35s)

                                                
                                    
TestErrorSpam/stop (155.83s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 stop: (5.381307167s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 stop: (1m15.217252268s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 stop
E0819 10:00:28.884274    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:28.893559    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:28.906156    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:28.929805    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:28.972027    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:29.055590    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:29.219234    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:29.542832    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:30.184276    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:31.466392    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:34.028020    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:39.150819    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:00:49.394503    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 10:01:09.876443    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-492000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-492000 stop: (1m15.226704066s)
--- PASS: TestErrorSpam/stop (155.83s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19478-1622/.minikube/files/etc/test/nested/copy/2174/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (76.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-622000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0819 10:01:50.838862    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-622000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m16.089899158s)
--- PASS: TestFunctional/serial/StartWithProxy (76.09s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (360.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 cache add registry.k8s.io/pause:3.1: (1m59.85559554s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 cache add registry.k8s.io/pause:3.3
E0819 10:10:28.883430    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 cache add registry.k8s.io/pause:3.3: (2m0.383326634s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 cache add registry.k8s.io/pause:latest: (2m0.370904546s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (360.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (60.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local1022277514/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 cache add minikube-local-cache-test:functional-622000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 cache add minikube-local-cache-test:functional-622000: (59.881683613s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 cache delete minikube-local-cache-test:functional-622000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-622000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.32s)
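The local-cache flow builds an image on the host and pushes it into the cluster's image cache. A minimal sketch; the build-context path is hypothetical (any directory with a Dockerfile works):

    docker build -t minikube-local-cache-test:functional-622000 ./build-context   # hypothetical path
    out/minikube-darwin-amd64 -p functional-622000 cache add minikube-local-cache-test:functional-622000
    out/minikube-darwin-amd64 -p functional-622000 cache delete minikube-local-cache-test:functional-622000
    docker rmi minikube-local-cache-test:functional-622000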

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
TestFunctional/serial/ExtraConfig (87.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-622000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0819 10:25:29.005975    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-622000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m27.4056894s)
functional_test.go:761: restart took 1m27.405828415s for "functional-622000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (87.41s)
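--extra-config threads per-component flags through to the deployed control plane; here the apiserver gains an admission plugin, and --wait=all makes the command block until the restarted components report healthy again:

    out/minikube-darwin-amd64 start -p functional-622000 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all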

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-622000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
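The health check parses the control-plane pod list; an equivalent one-liner (jsonpath here is a stand-in for the test's JSON parsing, not the test's own code):

    kubectl --context functional-622000 get po -l tier=control-plane -n kube-system \
        -o 'jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'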

                                                
                                    
TestFunctional/serial/LogsCmd (2.95s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 logs: (2.95242011s)
--- PASS: TestFunctional/serial/LogsCmd (2.95s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd986206634/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd986206634/001/logs.txt: (3.076458208s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.08s)

                                                
                                    
TestFunctional/serial/InvalidService (4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-622000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-622000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-622000: exit status 115 (309.631203ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:31634 |
	|-----------|-------------|-------------|--------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-622000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)
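A service whose selector matches no running pod makes `minikube service` bail out with SVC_UNREACHABLE (exit status 115) instead of opening a dead URL; by hand:

    kubectl --context functional-622000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-amd64 service invalid-svc -p functional-622000
    echo "exit: $?"   # 115 in this run
    kubectl --context functional-622000 delete -f testdata/invalidsvc.yaml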

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 config get cpus: exit status 14 (55.83812ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 config get cpus: exit status 14 (55.378909ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
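The round trip being asserted: `config get` on an unset key fails with exit 14, setting the key makes it readable, and unsetting restores the failure:

    out/minikube-darwin-amd64 -p functional-622000 config get cpus || echo "exit: $?"   # 14
    out/minikube-darwin-amd64 -p functional-622000 config set cpus 2
    out/minikube-darwin-amd64 -p functional-622000 config get cpus                      # prints 2
    out/minikube-darwin-amd64 -p functional-622000 config unset cpus
    out/minikube-darwin-amd64 -p functional-622000 config get cpus || echo "exit: $?"   # 14 again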

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-622000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-622000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4491: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.67s)

                                                
                                    
TestFunctional/parallel/DryRun (1.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-622000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-622000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (854.978795ms)

-- stdout --
	* [functional-622000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile

-- /stdout --
** stderr ** 
	I0819 10:26:20.947858    4417 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:26:20.948117    4417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:26:20.948123    4417 out.go:358] Setting ErrFile to fd 2...
	I0819 10:26:20.948126    4417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:26:20.948289    4417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:26:20.949640    4417 out.go:352] Setting JSON to false
	I0819 10:26:20.972064    4417 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3350,"bootTime":1724085030,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:26:20.972165    4417 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:26:20.994082    4417 out.go:177] * [functional-622000] minikube v1.33.1 on Darwin 14.6.1
	I0819 10:26:21.035851    4417 notify.go:220] Checking for updates...
	I0819 10:26:21.056634    4417 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:26:21.135699    4417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:26:21.178661    4417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:26:21.252601    4417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:26:21.332607    4417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:26:21.353661    4417 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:26:21.392403    4417 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:26:21.393060    4417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:26:21.393145    4417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:26:21.402780    4417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50767
	I0819 10:26:21.403154    4417 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:26:21.403568    4417 main.go:141] libmachine: Using API Version  1
	I0819 10:26:21.403580    4417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:26:21.403826    4417 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:26:21.403939    4417 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:26:21.404135    4417 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:26:21.404388    4417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:26:21.404413    4417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:26:21.412981    4417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50769
	I0819 10:26:21.413341    4417 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:26:21.413768    4417 main.go:141] libmachine: Using API Version  1
	I0819 10:26:21.413792    4417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:26:21.413983    4417 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:26:21.414100    4417 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:26:21.478779    4417 out.go:177] * Using the hyperkit driver based on existing profile
	I0819 10:26:21.557856    4417 start.go:297] selected driver: hyperkit
	I0819 10:26:21.557879    4417 start.go:901] validating driver "hyperkit" against &{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:26:21.558130    4417 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:26:21.621786    4417 out.go:201] 
	W0819 10:26:21.663685    4417 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 10:26:21.705782    4417 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-622000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.48s)
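--dry-run still runs resource validation against the existing profile, so an undersized --memory request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the VM:

    out/minikube-darwin-amd64 start -p functional-622000 --dry-run --memory 250MB --driver=hyperkit
    echo "exit: $?"   # 23: 250MiB is below the 1800MB usable minimum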

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.61s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-622000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-622000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (610.333404ms)

-- stdout --
	* [functional-622000] minikube v1.33.1 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant

-- /stdout --
** stderr ** 
	I0819 10:26:20.331488    4400 out.go:345] Setting OutFile to fd 1 ...
	I0819 10:26:20.331729    4400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:26:20.331735    4400 out.go:358] Setting ErrFile to fd 2...
	I0819 10:26:20.331738    4400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 10:26:20.331932    4400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 10:26:20.333447    4400 out.go:352] Setting JSON to false
	I0819 10:26:20.356834    4400 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3350,"bootTime":1724085030,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0819 10:26:20.356918    4400 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 10:26:20.377981    4400 out.go:177] * [functional-622000] minikube v1.33.1 sur Darwin 14.6.1
	I0819 10:26:20.420127    4400 notify.go:220] Checking for updates...
	I0819 10:26:20.441202    4400 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 10:26:20.483048    4400 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	I0819 10:26:20.525286    4400 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0819 10:26:20.567008    4400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 10:26:20.609093    4400 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	I0819 10:26:20.630082    4400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 10:26:20.651828    4400 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 10:26:20.652497    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:26:20.652576    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:26:20.662640    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50744
	I0819 10:26:20.663048    4400 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:26:20.663481    4400 main.go:141] libmachine: Using API Version  1
	I0819 10:26:20.663497    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:26:20.663766    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:26:20.663884    4400 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:26:20.664097    4400 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 10:26:20.664347    4400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 10:26:20.664387    4400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 10:26:20.673124    4400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50748
	I0819 10:26:20.673502    4400 main.go:141] libmachine: () Calling .GetVersion
	I0819 10:26:20.673877    4400 main.go:141] libmachine: Using API Version  1
	I0819 10:26:20.673904    4400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 10:26:20.674129    4400 main.go:141] libmachine: () Calling .GetMachineName
	I0819 10:26:20.674249    4400 main.go:141] libmachine: (functional-622000) Calling .DriverName
	I0819 10:26:20.703176    4400 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0819 10:26:20.763097    4400 start.go:297] selected driver: hyperkit
	I0819 10:26:20.763120    4400 start.go:901] validating driver "hyperkit" against &{Name:functional-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 10:26:20.763290    4400 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 10:26:20.809083    4400 out.go:201] 
	W0819 10:26:20.830147    4400 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 10:26:20.851100    4400 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.61s)
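The French output above is driven by the process locale. The log does not show how the test sets it, so the LC_ALL assignment below is an assumption:

    # Assumed mechanism: minikube localizes its output from the locale env vars.
    LC_ALL=fr_FR.UTF-8 out/minikube-darwin-amd64 start -p functional-622000 \
        --dry-run --memory 250MB --driver=hyperkit
    # -> "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."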

                                                
                                    
TestFunctional/parallel/StatusCmd (0.59s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.59s)
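Status output can be shaped with a Go template over the status struct's fields, or emitted as JSON, exactly as the test exercises:

    out/minikube-darwin-amd64 -p functional-622000 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-darwin-amd64 -p functional-622000 status -o json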

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (16.52s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-622000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-622000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-dfxch" [4fc5b308-ebfa-4899-bb60-fbec0e411181] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-dfxch" [4fc5b308-ebfa-4899-bb60-fbec0e411181] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 16.00703832s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:30362
functional_test.go:1675: http://192.169.0.4:30362: success! body:

Hostname: hello-node-connect-67bdd5bbb4-dfxch

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:30362
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (16.52s)
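The connect flow: expose a deployment as a NodePort service, let minikube resolve the node URL, and hit it. curl below replaces the test's Go HTTP client:

    kubectl --context functional-622000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-622000 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-darwin-amd64 -p functional-622000 service hello-node-connect --url)
    curl -s "$URL"   # echoserver answers with hostname, headers and request info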

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (29.63s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2587a5ae-7eab-4e52-ac9d-259f7fb8ddc8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009453569s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-622000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-622000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-622000 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-622000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-622000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [701fd9ae-ac36-49d2-a274-6d832bfb4d46] Pending
helpers_test.go:344: "sp-pod" [701fd9ae-ac36-49d2-a274-6d832bfb4d46] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [701fd9ae-ac36-49d2-a274-6d832bfb4d46] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.010918928s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-622000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-622000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-622000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ede2d1c5-8288-41e7-9bec-3cc4f7af8f1f] Pending
helpers_test.go:344: "sp-pod" [ede2d1c5-8288-41e7-9bec-3cc4f7af8f1f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ede2d1c5-8288-41e7-9bec-3cc4f7af8f1f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.013732274s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-622000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.63s)
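
Note: the flow above is apply pvc.yaml, wait for the claim, run a pod that writes /tmp/mount/foo, delete and recreate the pod, then confirm the file survived. A sketch of the wait step with client-go (assumed available; namespace and claim name taken from this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for i := 0; i < 60; i++ {
		pvc, err := client.CoreV1().PersistentVolumeClaims("default").
			Get(context.TODO(), "myclaim", metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			fmt.Println("pvc myclaim is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc myclaim")
}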

TestFunctional/parallel/SSHCmd (0.3s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.30s)

TestFunctional/parallel/CpCmd (1.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh -n functional-622000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 cp functional-622000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd3719439496/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh -n functional-622000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh -n functional-622000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.06s)
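
Note: the cp checks above are a round-trip: copy a file into the node, read it back with "minikube ssh", and compare. A Go sketch of the same round-trip (paths and profile name from this run; error handling kept minimal):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	run := func(args ...string) []byte {
		out, err := exec.Command("out/minikube-darwin-amd64", args...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}
	run("-p", "functional-622000", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	remote := run("-p", "functional-622000", "ssh", "-n", "functional-622000",
		"sudo cat /home/docker/cp-test.txt")
	if bytes.Equal(local, remote) {
		fmt.Println("cp round-trip ok")
	} else {
		fmt.Println("contents differ after cp")
	}
}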

TestFunctional/parallel/MySQL (29.02s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-622000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-7z762" [dcf1b562-44fa-4fa9-b77c-4f6c24c33cf7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-7z762" [dcf1b562-44fa-4fa9-b77c-4f6c24c33cf7] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004391903s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-622000 exec mysql-6cdb49bbb-7z762 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-622000 exec mysql-6cdb49bbb-7z762 -- mysql -ppassword -e "show databases;": exit status 1 (136.37594ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-622000 exec mysql-6cdb49bbb-7z762 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-622000 exec mysql-6cdb49bbb-7z762 -- mysql -ppassword -e "show databases;": exit status 1 (127.451328ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-622000 exec mysql-6cdb49bbb-7z762 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.02s)
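
Note: the two ERROR 2002 exits above are expected, not flakes: the pod reports Running before mysqld has finished creating its socket, so the test simply re-runs the query until it succeeds. A sketch of that retry loop (pod name from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-622000",
			"exec", "mysql-6cdb49bbb-7z762", "--",
			"mysql", "-ppassword", "-e", "show databases;")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mysql ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became ready")
}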

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2174/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo cat /etc/test/nested/copy/2174/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.07s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2174.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo cat /etc/ssl/certs/2174.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2174.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo cat /usr/share/ca-certificates/2174.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/21742.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo cat /etc/ssl/certs/21742.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/21742.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo cat /usr/share/ca-certificates/21742.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.07s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-622000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh "sudo systemctl is-active crio": exit status 1 (152.828075ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)
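
Note: the non-zero exit above is the passing case: with the docker runtime selected, crio must be inactive, and "systemctl is-active" exits non-zero (here status 3, stdout "inactive") for any unit that is not active. A sketch of that exit-code check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-622000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.Output() // stdout only; stderr carries the ssh exit notice
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		fmt.Println("ok: crio is disabled, as expected with the docker runtime")
		return
	}
	fmt.Printf("unexpected: state=%q err=%v\n", state, err)
}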

TestFunctional/parallel/License (0.48s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-622000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-622000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-622000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-622000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4183: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-622000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-622000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f5c7b9a9-f7fa-4be7-a1d0-9f00802ae32c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f5c7b9a9-f7fa-4be7-a1d0-9f00802ae32c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005550812s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-622000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.90.78 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
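
Note: the dig invocation above queries the cluster DNS service directly (10.96.0.10), which is only reachable because "minikube tunnel" is routing service traffic to the host. The same lookup in Go, with a resolver pinned to that server:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}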

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-622000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-622000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-622000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-w8wpn" [cd3a78c0-a050-4e8a-ab0c-4533579417f5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-w8wpn" [cd3a78c0-a050-4e8a-ab0c-4533579417f5] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.006599422s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)
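
Note: "waiting ... for pods matching app=hello-node" is a poll over a label selector until some pod reports Running. A client-go sketch of that wait (selector and namespace from this run; client-go itself is an assumed dependency):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for i := 0; i < 120; i++ {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=hello-node"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("pod running:", p.Name)
					return
				}
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for app=hello-node")
}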

TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "193.097593ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "77.696267ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "192.84471ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "78.769934ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)
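
Note: the Took "..." lines above are simple wall-clock timings of each CLI invocation. A trivial sketch of that measurement:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("out/minikube-darwin-amd64",
		"profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("took %q to run profile list\n%s", time.Since(start).String(), out)
}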

TestFunctional/parallel/MountCmd/any-port (7.32s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port568242628/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724088373859265000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port568242628/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724088373859265000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port568242628/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724088373859265000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port568242628/001/test-1724088373859265000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (156.73448ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 17:26 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 17:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 17:26 test-1724088373859265000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh cat /mount-9p/test-1724088373859265000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-622000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3d5e052c-e2de-42a9-b6a8-8c6e66bb8548] Pending
helpers_test.go:344: "busybox-mount" [3d5e052c-e2de-42a9-b6a8-8c6e66bb8548] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3d5e052c-e2de-42a9-b6a8-8c6e66bb8548] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3d5e052c-e2de-42a9-b6a8-8c6e66bb8548] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007465392s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-622000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port568242628/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.32s)
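
Note: the first findmnt probe above fails because the 9p mount is still being established when the check first runs; the test retries until "findmnt -T /mount-9p" reports a 9p filesystem. A sketch of that probe loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-622000",
			"ssh", "findmnt -T /mount-9p | grep 9p").Output()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared at /mount-9p")
}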

TestFunctional/parallel/ServiceCmd/List (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 service list -o json
functional_test.go:1494: Took "389.072854ms" to run "out/minikube-darwin-amd64 -p functional-622000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:31441
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:31441
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/MountCmd/specific-port (1.82s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1224494972/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (172.748091ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1224494972/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh "sudo umount -f /mount-9p": exit status 1 (147.42572ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-622000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1224494972/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.4s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2039553178/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2039553178/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2039553178/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T" /mount1: exit status 1 (166.371853ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T" /mount1: exit status 1 (254.883221ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-622000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2039553178/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2039553178/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-622000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2039553178/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.40s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.64s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-622000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:3.10
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-622000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-622000 image ls --format short --alsologtostderr:
I0819 10:26:35.865460    4707 out.go:345] Setting OutFile to fd 1 ...
I0819 10:26:35.865753    4707 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:35.865758    4707 out.go:358] Setting ErrFile to fd 2...
I0819 10:26:35.865762    4707 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:35.865945    4707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
I0819 10:26:35.866559    4707 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:35.866655    4707 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:35.867039    4707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:35.867077    4707 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:35.875577    4707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51051
I0819 10:26:35.876062    4707 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:35.876500    4707 main.go:141] libmachine: Using API Version  1
I0819 10:26:35.876510    4707 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:35.876771    4707 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:35.876895    4707 main.go:141] libmachine: (functional-622000) Calling .GetState
I0819 10:26:35.876992    4707 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0819 10:26:35.877070    4707 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
I0819 10:26:35.878355    4707 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:35.878379    4707 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:35.887128    4707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51053
I0819 10:26:35.887508    4707 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:35.887875    4707 main.go:141] libmachine: Using API Version  1
I0819 10:26:35.887888    4707 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:35.888144    4707 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:35.888262    4707 main.go:141] libmachine: (functional-622000) Calling .DriverName
I0819 10:26:35.888436    4707 ssh_runner.go:195] Run: systemctl --version
I0819 10:26:35.888454    4707 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
I0819 10:26:35.888538    4707 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
I0819 10:26:35.888618    4707 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
I0819 10:26:35.888720    4707 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
I0819 10:26:35.888813    4707 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
I0819 10:26:35.927470    4707 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0819 10:26:35.954109    4707 main.go:141] libmachine: Making call to close driver server
I0819 10:26:35.954119    4707 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:35.954295    4707 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:35.954308    4707 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 10:26:35.954380    4707 main.go:141] libmachine: Making call to close driver server
I0819 10:26:35.954376    4707 main.go:141] libmachine: (functional-622000) DBG | Closing plugin on server side
I0819 10:26:35.954388    4707 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:35.954544    4707 main.go:141] libmachine: (functional-622000) DBG | Closing plugin on server side
I0819 10:26:35.954563    4707 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:35.954576    4707 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-622000 image ls --format table --alsologtostderr:
|-----------------------------------------|-------------------|---------------|--------|
|                  Image                  |        Tag        |   Image ID    |  Size  |
|-----------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| docker.io/library/nginx                 | alpine            | 0f0eda053dc5c | 43.3MB |
| registry.k8s.io/pause                   | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/dashboard        | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                 | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/echoserver              | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/kube-apiserver          | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/etcd                    | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kicbase/echo-server           | functional-622000 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper  | <none>            | 115053965e86b | 43.8MB |
|-----------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-622000 image ls --format table --alsologtostderr:
I0819 10:26:36.507936    4719 out.go:345] Setting OutFile to fd 1 ...
I0819 10:26:36.508126    4719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:36.508132    4719 out.go:358] Setting ErrFile to fd 2...
I0819 10:26:36.508136    4719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:36.508319    4719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
I0819 10:26:36.508900    4719 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:36.508993    4719 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:36.509328    4719 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:36.509373    4719 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:36.517837    4719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51066
I0819 10:26:36.518277    4719 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:36.518689    4719 main.go:141] libmachine: Using API Version  1
I0819 10:26:36.518699    4719 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:36.518952    4719 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:36.519077    4719 main.go:141] libmachine: (functional-622000) Calling .GetState
I0819 10:26:36.519166    4719 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0819 10:26:36.519237    4719 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
I0819 10:26:36.520521    4719 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:36.520544    4719 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:36.528962    4719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51068
I0819 10:26:36.529328    4719 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:36.529648    4719 main.go:141] libmachine: Using API Version  1
I0819 10:26:36.529658    4719 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:36.529892    4719 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:36.530011    4719 main.go:141] libmachine: (functional-622000) Calling .DriverName
I0819 10:26:36.530183    4719 ssh_runner.go:195] Run: systemctl --version
I0819 10:26:36.530200    4719 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
I0819 10:26:36.530290    4719 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
I0819 10:26:36.530365    4719 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
I0819 10:26:36.530466    4719 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
I0819 10:26:36.530571    4719 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
I0819 10:26:36.568045    4719 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0819 10:26:36.587772    4719 main.go:141] libmachine: Making call to close driver server
I0819 10:26:36.587781    4719 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:36.587951    4719 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:36.587959    4719 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 10:26:36.587964    4719 main.go:141] libmachine: Making call to close driver server
I0819 10:26:36.587968    4719 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:36.587967    4719 main.go:141] libmachine: (functional-622000) DBG | Closing plugin on server side
I0819 10:26:36.588131    4719 main.go:141] libmachine: (functional-622000) DBG | Closing plugin on server side
I0819 10:26:36.588131    4719 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:36.588144    4719 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-622000 image ls --format json --alsologtostderr:
[{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-622000"],"size":"4940000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-622000 image ls --format json --alsologtostderr:
I0819 10:26:36.251399    4715 out.go:345] Setting OutFile to fd 1 ...
I0819 10:26:36.275691    4715 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:36.275729    4715 out.go:358] Setting ErrFile to fd 2...
I0819 10:26:36.275740    4715 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:36.276007    4715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
I0819 10:26:36.313179    4715 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:36.313370    4715 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:36.313912    4715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:36.313971    4715 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:36.323021    4715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51061
I0819 10:26:36.323438    4715 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:36.323852    4715 main.go:141] libmachine: Using API Version  1
I0819 10:26:36.323881    4715 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:36.324116    4715 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:36.324232    4715 main.go:141] libmachine: (functional-622000) Calling .GetState
I0819 10:26:36.324323    4715 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0819 10:26:36.324392    4715 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
I0819 10:26:36.325650    4715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:36.325672    4715 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:36.334242    4715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51063
I0819 10:26:36.334614    4715 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:36.334933    4715 main.go:141] libmachine: Using API Version  1
I0819 10:26:36.334944    4715 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:36.335156    4715 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:36.335264    4715 main.go:141] libmachine: (functional-622000) Calling .DriverName
I0819 10:26:36.335431    4715 ssh_runner.go:195] Run: systemctl --version
I0819 10:26:36.335453    4715 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
I0819 10:26:36.335545    4715 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
I0819 10:26:36.335633    4715 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
I0819 10:26:36.335721    4715 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
I0819 10:26:36.335806    4715 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
I0819 10:26:36.383305    4715 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0819 10:26:36.427768    4715 main.go:141] libmachine: Making call to close driver server
I0819 10:26:36.427781    4715 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:36.427929    4715 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:36.427939    4715 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 10:26:36.427945    4715 main.go:141] libmachine: Making call to close driver server
I0819 10:26:36.427951    4715 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:36.427963    4715 main.go:141] libmachine: (functional-622000) DBG | Closing plugin on server side
I0819 10:26:36.428072    4715 main.go:141] libmachine: (functional-622000) DBG | Closing plugin on server side
I0819 10:26:36.428092    4715 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:36.428106    4715 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-622000 image ls --format yaml --alsologtostderr:
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-622000
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-622000 image ls --format yaml --alsologtostderr:
I0819 10:26:36.034474    4711 out.go:345] Setting OutFile to fd 1 ...
I0819 10:26:36.035267    4711 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:36.035276    4711 out.go:358] Setting ErrFile to fd 2...
I0819 10:26:36.035282    4711 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:36.035812    4711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
I0819 10:26:36.036410    4711 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:36.036500    4711 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:36.036830    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:36.036885    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:36.045565    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51056
I0819 10:26:36.046029    4711 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:36.046444    4711 main.go:141] libmachine: Using API Version  1
I0819 10:26:36.046456    4711 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:36.046671    4711 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:36.046773    4711 main.go:141] libmachine: (functional-622000) Calling .GetState
I0819 10:26:36.046862    4711 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0819 10:26:36.046947    4711 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
I0819 10:26:36.048228    4711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:36.048252    4711 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:36.056761    4711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51058
I0819 10:26:36.057130    4711 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:36.057459    4711 main.go:141] libmachine: Using API Version  1
I0819 10:26:36.057468    4711 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:36.057690    4711 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:36.057799    4711 main.go:141] libmachine: (functional-622000) Calling .DriverName
I0819 10:26:36.057968    4711 ssh_runner.go:195] Run: systemctl --version
I0819 10:26:36.057986    4711 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
I0819 10:26:36.058061    4711 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
I0819 10:26:36.058146    4711 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
I0819 10:26:36.058222    4711 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
I0819 10:26:36.058314    4711 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
I0819 10:26:36.105286    4711 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0819 10:26:36.171672    4711 main.go:141] libmachine: Making call to close driver server
I0819 10:26:36.171682    4711 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:36.171844    4711 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:36.171853    4711 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 10:26:36.171858    4711 main.go:141] libmachine: Making call to close driver server
I0819 10:26:36.171863    4711 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:36.171883    4711 main.go:141] libmachine: (functional-622000) DBG | Closing plugin on server side
I0819 10:26:36.171998    4711 main.go:141] libmachine: (functional-622000) DBG | Closing plugin on server side
I0819 10:26:36.172007    4711 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:36.172018    4711 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-622000 ssh pgrep buildkitd: exit status 1 (131.526926ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image build -t localhost/my-image:functional-622000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-622000 image build -t localhost/my-image:functional-622000 testdata/build --alsologtostderr: (3.838747646s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-622000 image build -t localhost/my-image:functional-622000 testdata/build --alsologtostderr:
I0819 10:26:36.799286    4728 out.go:345] Setting OutFile to fd 1 ...
I0819 10:26:36.799549    4728 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:36.799555    4728 out.go:358] Setting ErrFile to fd 2...
I0819 10:26:36.799559    4728 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 10:26:36.799729    4728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
I0819 10:26:36.800323    4728 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:36.800974    4728 config.go:182] Loaded profile config "functional-622000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 10:26:36.801332    4728 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:36.801378    4728 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:36.809616    4728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51078
I0819 10:26:36.810067    4728 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:36.810502    4728 main.go:141] libmachine: Using API Version  1
I0819 10:26:36.810513    4728 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:36.810746    4728 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:36.810876    4728 main.go:141] libmachine: (functional-622000) Calling .GetState
I0819 10:26:36.810974    4728 main.go:141] libmachine: (functional-622000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0819 10:26:36.811040    4728 main.go:141] libmachine: (functional-622000) DBG | hyperkit pid from json: 3102
I0819 10:26:36.812294    4728 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0819 10:26:36.812317    4728 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0819 10:26:36.820732    4728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51080
I0819 10:26:36.821102    4728 main.go:141] libmachine: () Calling .GetVersion
I0819 10:26:36.821451    4728 main.go:141] libmachine: Using API Version  1
I0819 10:26:36.821467    4728 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 10:26:36.821701    4728 main.go:141] libmachine: () Calling .GetMachineName
I0819 10:26:36.821824    4728 main.go:141] libmachine: (functional-622000) Calling .DriverName
I0819 10:26:36.821988    4728 ssh_runner.go:195] Run: systemctl --version
I0819 10:26:36.822004    4728 main.go:141] libmachine: (functional-622000) Calling .GetSSHHostname
I0819 10:26:36.822087    4728 main.go:141] libmachine: (functional-622000) Calling .GetSSHPort
I0819 10:26:36.822169    4728 main.go:141] libmachine: (functional-622000) Calling .GetSSHKeyPath
I0819 10:26:36.822246    4728 main.go:141] libmachine: (functional-622000) Calling .GetSSHUsername
I0819 10:26:36.822336    4728 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/functional-622000/id_rsa Username:docker}
I0819 10:26:36.858516    4728 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1719733392.tar
I0819 10:26:36.858607    4728 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 10:26:36.873092    4728 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1719733392.tar
I0819 10:26:36.876820    4728 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1719733392.tar: stat -c "%s %y" /var/lib/minikube/build/build.1719733392.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1719733392.tar': No such file or directory
I0819 10:26:36.876851    4728 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1719733392.tar --> /var/lib/minikube/build/build.1719733392.tar (3072 bytes)
I0819 10:26:36.909502    4728 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1719733392
I0819 10:26:36.923088    4728 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1719733392 -xf /var/lib/minikube/build/build.1719733392.tar
I0819 10:26:36.936949    4728 docker.go:360] Building image: /var/lib/minikube/build/build.1719733392
I0819 10:26:36.937016    4728 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-622000 /var/lib/minikube/build/build.1719733392
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 1.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#5 DONE 1.7s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:338d302a86382545fb70d88234d3e5226e906479776000178aff2b052bbc7183 done
#8 naming to localhost/my-image:functional-622000 done
#8 DONE 0.1s
I0819 10:26:40.519299    4728 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-622000 /var/lib/minikube/build/build.1719733392: (3.582189629s)
I0819 10:26:40.519368    4728 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1719733392
I0819 10:26:40.538232    4728 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1719733392.tar
I0819 10:26:40.554680    4728 build_images.go:217] Built localhost/my-image:functional-622000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1719733392.tar
I0819 10:26:40.554707    4728 build_images.go:133] succeeded building to: functional-622000
I0819 10:26:40.554712    4728 build_images.go:134] failed building to: 
I0819 10:26:40.554726    4728 main.go:141] libmachine: Making call to close driver server
I0819 10:26:40.554733    4728 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:40.554891    4728 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:40.554910    4728 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 10:26:40.554922    4728 main.go:141] libmachine: Making call to close driver server
I0819 10:26:40.554941    4728 main.go:141] libmachine: (functional-622000) Calling .Close
I0819 10:26:40.555092    4728 main.go:141] libmachine: (functional-622000) DBG | Closing plugin on server side
I0819 10:26:40.555112    4728 main.go:141] libmachine: Successfully made call to close driver server
I0819 10:26:40.555122    4728 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)
                                    
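A minimal reproduction sketch of this build, for reference: steps [1/3] through [3/3] in the BuildKit trace above pin down a three-line Dockerfile, and the 62B context transferred in step #4 suggests a single small file. The file contents below are assumptions, not the verbatim testdata/build directory:

# Hedged reconstruction of the build context implied by the trace above;
# only the Dockerfile instructions are read off the log, content.txt is made up.
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
printf 'hello\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
# Same invocation the test used, pointed at the reconstructed context:
out/minikube-darwin-amd64 -p functional-622000 image build -t localhost/my-image:functional-622000 .
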
TestFunctional/parallel/ImageCommands/Setup (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.918614189s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-622000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image load --daemon kicbase/echo-server:functional-622000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image load --daemon kicbase/echo-server:functional-622000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.68s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-622000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image load --daemon kicbase/echo-server:functional-622000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image save kicbase/echo-server:functional-622000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image rm kicbase/echo-server:functional-622000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-622000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 image save --daemon kicbase/echo-server:functional-622000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-622000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

TestFunctional/parallel/DockerEnv/bash (0.63s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-622000 docker-env) && out/minikube-darwin-amd64 status -p functional-622000"
2024/08/19 10:26:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-622000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.63s)
                                    
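The eval pattern above works because `minikube docker-env` prints shell export statements for the Docker daemon inside the VM. A minimal sketch of what gets evaluated, with illustrative values: the IP matches the 192.169.0.4 VM address seen elsewhere in this log, 2376 is the conventional Docker TLS port, and the cert path is an assumption:

# Typical docker-env output (illustrative, not captured from this run):
#   export DOCKER_TLS_VERIFY="1"
#   export DOCKER_HOST="tcp://192.169.0.4:2376"
#   export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19478-1622/.minikube/certs"
#   export MINIKUBE_ACTIVE_DOCKERD="functional-622000"
eval $(out/minikube-darwin-amd64 -p functional-622000 docker-env)
docker images   # now lists images from the daemon inside functional-622000
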
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-622000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-622000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-622000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-622000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-431000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.32s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.43s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.43s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.31s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.31s)

TestImageBuild/serial/Setup (40.53s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-854000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-854000 --driver=hyperkit : (40.526557294s)
--- PASS: TestImageBuild/serial/Setup (40.53s)

TestImageBuild/serial/NormalBuild (1.59s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-854000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-854000: (1.594403611s)
--- PASS: TestImageBuild/serial/NormalBuild (1.59s)

TestImageBuild/serial/BuildWithBuildArg (0.75s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-854000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.75s)

TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-854000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-854000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

TestJSONOutput/start/Command (77.06s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-557000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-557000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m17.062864247s)
--- PASS: TestJSONOutput/start/Command (77.06s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.49s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-557000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-557000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-557000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-557000 --output=json --user=testUser: (8.327913613s)
--- PASS: TestJSONOutput/stop/Command (8.33s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.58s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-044000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-044000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (361.465404ms)

-- stdout --
	{"specversion":"1.0","id":"4c94ad5e-91c2-4517-926e-beeadf922f35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-044000] minikube v1.33.1 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"437f14a0-24ed-456c-a452-919917d6d327","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19478"}}
	{"specversion":"1.0","id":"ce3769f2-a821-4f1a-95a6-d79d2078a6ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig"}}
	{"specversion":"1.0","id":"ea5ebab3-3838-4951-827f-6b4cc11a106d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7adcbac8-892d-4e34-ba67-e4a96f2ae9d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a13e430a-dc75-4e06-a607-2c58dd4de464","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube"}}
	{"specversion":"1.0","id":"440020a2-0d92-4824-8b3d-1bd4d94ecb36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f2b41be8-83de-4ade-910a-39649ac72d6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-044000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-044000
--- PASS: TestErrorJSONOutput (0.58s)
                                    
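Each stdout line above is a self-contained CloudEvents-style JSON object, so the failure can be pulled out of the stream mechanically. A minimal sketch, assuming jq is available; the field names (type, data.name, data.message) are taken verbatim from the logged events:

# Extract the error event from the JSON stream (illustrative, not part of the test):
out/minikube-darwin-amd64 start -p json-output-error-044000 --memory=2200 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'
# => DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/amd64
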
TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (89.37s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-305000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-305000 --driver=hyperkit : (39.933192097s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-307000 --driver=hyperkit 
E0819 11:00:29.104551    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:00:43.510720    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-307000 --driver=hyperkit : (38.120078226s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-305000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-307000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-307000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-307000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-307000: (5.239226959s)
helpers_test.go:175: Cleaning up "first-305000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-305000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-305000: (5.273107175s)
--- PASS: TestMinikubeProfile (89.37s)

TestMultiNode/serial/FreshStart2Nodes (109.09s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-708000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0819 11:03:46.581822    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-708000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m48.845591011s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.09s)

TestMultiNode/serial/DeployApp2Nodes (4.48s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-708000 -- rollout status deployment/busybox: (2.660080597s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-qx7dt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-txnrm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-qx7dt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-txnrm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-qx7dt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-txnrm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.48s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-qx7dt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0819 11:05:29.108237    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-qx7dt -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-txnrm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-708000 -- exec busybox-7dff88458-txnrm -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

TestMultiNode/serial/AddNode (45.52s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-708000 -v 3 --alsologtostderr
E0819 11:05:43.515696    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-708000 -v 3 --alsologtostderr: (45.196840125s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.52s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-708000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

TestMultiNode/serial/CopyFile (5.25s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp testdata/cp-test.txt multinode-708000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp multinode-708000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3821694541/001/cp-test_multinode-708000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp multinode-708000:/home/docker/cp-test.txt multinode-708000-m02:/home/docker/cp-test_multinode-708000_multinode-708000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m02 "sudo cat /home/docker/cp-test_multinode-708000_multinode-708000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp multinode-708000:/home/docker/cp-test.txt multinode-708000-m03:/home/docker/cp-test_multinode-708000_multinode-708000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m03 "sudo cat /home/docker/cp-test_multinode-708000_multinode-708000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp testdata/cp-test.txt multinode-708000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp multinode-708000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3821694541/001/cp-test_multinode-708000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp multinode-708000-m02:/home/docker/cp-test.txt multinode-708000:/home/docker/cp-test_multinode-708000-m02_multinode-708000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000 "sudo cat /home/docker/cp-test_multinode-708000-m02_multinode-708000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp multinode-708000-m02:/home/docker/cp-test.txt multinode-708000-m03:/home/docker/cp-test_multinode-708000-m02_multinode-708000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m03 "sudo cat /home/docker/cp-test_multinode-708000-m02_multinode-708000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp testdata/cp-test.txt multinode-708000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp multinode-708000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile3821694541/001/cp-test_multinode-708000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp multinode-708000-m03:/home/docker/cp-test.txt multinode-708000:/home/docker/cp-test_multinode-708000-m03_multinode-708000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000 "sudo cat /home/docker/cp-test_multinode-708000-m03_multinode-708000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 cp multinode-708000-m03:/home/docker/cp-test.txt multinode-708000-m02:/home/docker/cp-test_multinode-708000-m03_multinode-708000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 ssh -n multinode-708000-m02 "sudo cat /home/docker/cp-test_multinode-708000-m03_multinode-708000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.25s)

TestMultiNode/serial/StopNode (2.84s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-708000 node stop m03: (2.333820089s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-708000 status: exit status 7 (250.779018ms)
-- stdout --
	multinode-708000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-708000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-708000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-708000 status --alsologtostderr: exit status 7 (252.550079ms)
-- stdout --
	multinode-708000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-708000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-708000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0819 11:06:23.356286    7907 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:06:23.356550    7907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:06:23.356556    7907 out.go:358] Setting ErrFile to fd 2...
	I0819 11:06:23.356559    7907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:06:23.356720    7907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 11:06:23.356898    7907 out.go:352] Setting JSON to false
	I0819 11:06:23.356919    7907 mustload.go:65] Loading cluster: multinode-708000
	I0819 11:06:23.356960    7907 notify.go:220] Checking for updates...
	I0819 11:06:23.357201    7907 config.go:182] Loaded profile config "multinode-708000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:06:23.357216    7907 status.go:255] checking status of multinode-708000 ...
	I0819 11:06:23.357568    7907 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:06:23.357610    7907 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:06:23.366763    7907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53069
	I0819 11:06:23.367196    7907 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:06:23.367598    7907 main.go:141] libmachine: Using API Version  1
	I0819 11:06:23.367607    7907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:06:23.367797    7907 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:06:23.367927    7907 main.go:141] libmachine: (multinode-708000) Calling .GetState
	I0819 11:06:23.368018    7907 main.go:141] libmachine: (multinode-708000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:06:23.368091    7907 main.go:141] libmachine: (multinode-708000) DBG | hyperkit pid from json: 7610
	I0819 11:06:23.369295    7907 status.go:330] multinode-708000 host status = "Running" (err=<nil>)
	I0819 11:06:23.369316    7907 host.go:66] Checking if "multinode-708000" exists ...
	I0819 11:06:23.369551    7907 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:06:23.369570    7907 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:06:23.377904    7907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53071
	I0819 11:06:23.378247    7907 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:06:23.378620    7907 main.go:141] libmachine: Using API Version  1
	I0819 11:06:23.378641    7907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:06:23.378861    7907 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:06:23.378979    7907 main.go:141] libmachine: (multinode-708000) Calling .GetIP
	I0819 11:06:23.379119    7907 host.go:66] Checking if "multinode-708000" exists ...
	I0819 11:06:23.379359    7907 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:06:23.379396    7907 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:06:23.391133    7907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53073
	I0819 11:06:23.391511    7907 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:06:23.391815    7907 main.go:141] libmachine: Using API Version  1
	I0819 11:06:23.391825    7907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:06:23.392042    7907 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:06:23.392133    7907 main.go:141] libmachine: (multinode-708000) Calling .DriverName
	I0819 11:06:23.392286    7907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:06:23.392314    7907 main.go:141] libmachine: (multinode-708000) Calling .GetSSHHostname
	I0819 11:06:23.392390    7907 main.go:141] libmachine: (multinode-708000) Calling .GetSSHPort
	I0819 11:06:23.392461    7907 main.go:141] libmachine: (multinode-708000) Calling .GetSSHKeyPath
	I0819 11:06:23.392555    7907 main.go:141] libmachine: (multinode-708000) Calling .GetSSHUsername
	I0819 11:06:23.392652    7907 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/multinode-708000/id_rsa Username:docker}
	I0819 11:06:23.425479    7907 ssh_runner.go:195] Run: systemctl --version
	I0819 11:06:23.429937    7907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:06:23.440477    7907 kubeconfig.go:125] found "multinode-708000" server: "https://192.169.0.13:8443"
	I0819 11:06:23.440499    7907 api_server.go:166] Checking apiserver status ...
	I0819 11:06:23.440538    7907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:06:23.451274    7907 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1925/cgroup
	W0819 11:06:23.458387    7907 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1925/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:06:23.458434    7907 ssh_runner.go:195] Run: ls
	I0819 11:06:23.461895    7907 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0819 11:06:23.465017    7907 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0819 11:06:23.465029    7907 status.go:422] multinode-708000 apiserver status = Running (err=<nil>)
	I0819 11:06:23.465038    7907 status.go:257] multinode-708000 status: &{Name:multinode-708000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:06:23.465053    7907 status.go:255] checking status of multinode-708000-m02 ...
	I0819 11:06:23.465324    7907 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:06:23.465345    7907 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:06:23.474090    7907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53077
	I0819 11:06:23.474440    7907 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:06:23.474773    7907 main.go:141] libmachine: Using API Version  1
	I0819 11:06:23.474787    7907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:06:23.474987    7907 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:06:23.475080    7907 main.go:141] libmachine: (multinode-708000-m02) Calling .GetState
	I0819 11:06:23.475152    7907 main.go:141] libmachine: (multinode-708000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:06:23.475223    7907 main.go:141] libmachine: (multinode-708000-m02) DBG | hyperkit pid from json: 7628
	I0819 11:06:23.476423    7907 status.go:330] multinode-708000-m02 host status = "Running" (err=<nil>)
	I0819 11:06:23.476432    7907 host.go:66] Checking if "multinode-708000-m02" exists ...
	I0819 11:06:23.476685    7907 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:06:23.476717    7907 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:06:23.485187    7907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53079
	I0819 11:06:23.485508    7907 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:06:23.485863    7907 main.go:141] libmachine: Using API Version  1
	I0819 11:06:23.485880    7907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:06:23.486070    7907 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:06:23.486186    7907 main.go:141] libmachine: (multinode-708000-m02) Calling .GetIP
	I0819 11:06:23.486262    7907 host.go:66] Checking if "multinode-708000-m02" exists ...
	I0819 11:06:23.486515    7907 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:06:23.486538    7907 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:06:23.494960    7907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53081
	I0819 11:06:23.495296    7907 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:06:23.495639    7907 main.go:141] libmachine: Using API Version  1
	I0819 11:06:23.495656    7907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:06:23.495875    7907 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:06:23.495977    7907 main.go:141] libmachine: (multinode-708000-m02) Calling .DriverName
	I0819 11:06:23.496110    7907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:06:23.496120    7907 main.go:141] libmachine: (multinode-708000-m02) Calling .GetSSHHostname
	I0819 11:06:23.496195    7907 main.go:141] libmachine: (multinode-708000-m02) Calling .GetSSHPort
	I0819 11:06:23.496274    7907 main.go:141] libmachine: (multinode-708000-m02) Calling .GetSSHKeyPath
	I0819 11:06:23.496354    7907 main.go:141] libmachine: (multinode-708000-m02) Calling .GetSSHUsername
	I0819 11:06:23.496427    7907 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19478-1622/.minikube/machines/multinode-708000-m02/id_rsa Username:docker}
	I0819 11:06:23.530200    7907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:06:23.541468    7907 status.go:257] multinode-708000-m02 status: &{Name:multinode-708000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:06:23.541482    7907 status.go:255] checking status of multinode-708000-m03 ...
	I0819 11:06:23.541743    7907 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:06:23.541766    7907 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:06:23.550395    7907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53084
	I0819 11:06:23.550769    7907 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:06:23.551087    7907 main.go:141] libmachine: Using API Version  1
	I0819 11:06:23.551098    7907 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:06:23.551323    7907 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:06:23.551431    7907 main.go:141] libmachine: (multinode-708000-m03) Calling .GetState
	I0819 11:06:23.551512    7907 main.go:141] libmachine: (multinode-708000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:06:23.551586    7907 main.go:141] libmachine: (multinode-708000-m03) DBG | hyperkit pid from json: 7701
	I0819 11:06:23.552746    7907 main.go:141] libmachine: (multinode-708000-m03) DBG | hyperkit pid 7701 missing from process table
	I0819 11:06:23.552772    7907 status.go:330] multinode-708000-m03 host status = "Stopped" (err=<nil>)
	I0819 11:06:23.552779    7907 status.go:343] host is not running, skipping remaining checks
	I0819 11:06:23.552785    7907 status.go:257] multinode-708000-m03 status: &{Name:multinode-708000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.84s)

TestMultiNode/serial/StartAfterStop (41.44s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 node start m03 -v=7 --alsologtostderr
E0819 11:06:52.200644    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-708000 node start m03 -v=7 --alsologtostderr: (41.077468345s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.44s)

TestMultiNode/serial/RestartKeepsNodes (163.95s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-708000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-708000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-708000: (18.874215169s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-708000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-708000 --wait=true -v=8 --alsologtostderr: (2m24.964552187s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-708000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (163.95s)

TestMultiNode/serial/DeleteNode (3.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-708000 node delete m03: (2.924610593s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.27s)

TestMultiNode/serial/StopMultiNode (16.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-708000 stop: (16.640381134s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-708000 status: exit status 7 (79.223808ms)
-- stdout --
	multinode-708000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-708000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-708000 status --alsologtostderr: exit status 7 (78.788344ms)
-- stdout --
	multinode-708000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-708000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0819 11:10:08.952351    8052 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:10:08.952619    8052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:08.952624    8052 out.go:358] Setting ErrFile to fd 2...
	I0819 11:10:08.952628    8052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:10:08.952797    8052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19478-1622/.minikube/bin
	I0819 11:10:08.952979    8052 out.go:352] Setting JSON to false
	I0819 11:10:08.953000    8052 mustload.go:65] Loading cluster: multinode-708000
	I0819 11:10:08.953040    8052 notify.go:220] Checking for updates...
	I0819 11:10:08.953296    8052 config.go:182] Loaded profile config "multinode-708000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:10:08.953312    8052 status.go:255] checking status of multinode-708000 ...
	I0819 11:10:08.953676    8052 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:10:08.953730    8052 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:10:08.962627    8052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53314
	I0819 11:10:08.962980    8052 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:10:08.963382    8052 main.go:141] libmachine: Using API Version  1
	I0819 11:10:08.963413    8052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:10:08.963612    8052 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:10:08.963707    8052 main.go:141] libmachine: (multinode-708000) Calling .GetState
	I0819 11:10:08.963811    8052 main.go:141] libmachine: (multinode-708000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:10:08.963845    8052 main.go:141] libmachine: (multinode-708000) DBG | hyperkit pid from json: 7975
	I0819 11:10:08.964759    8052 main.go:141] libmachine: (multinode-708000) DBG | hyperkit pid 7975 missing from process table
	I0819 11:10:08.964790    8052 status.go:330] multinode-708000 host status = "Stopped" (err=<nil>)
	I0819 11:10:08.964797    8052 status.go:343] host is not running, skipping remaining checks
	I0819 11:10:08.964803    8052 status.go:257] multinode-708000 status: &{Name:multinode-708000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:10:08.964825    8052 status.go:255] checking status of multinode-708000-m02 ...
	I0819 11:10:08.965065    8052 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0819 11:10:08.965089    8052 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0819 11:10:08.973356    8052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53316
	I0819 11:10:08.973668    8052 main.go:141] libmachine: () Calling .GetVersion
	I0819 11:10:08.974018    8052 main.go:141] libmachine: Using API Version  1
	I0819 11:10:08.974042    8052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 11:10:08.974246    8052 main.go:141] libmachine: () Calling .GetMachineName
	I0819 11:10:08.974350    8052 main.go:141] libmachine: (multinode-708000-m02) Calling .GetState
	I0819 11:10:08.974428    8052 main.go:141] libmachine: (multinode-708000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0819 11:10:08.974497    8052 main.go:141] libmachine: (multinode-708000-m02) DBG | hyperkit pid from json: 7990
	I0819 11:10:08.975401    8052 main.go:141] libmachine: (multinode-708000-m02) DBG | hyperkit pid 7990 missing from process table
	I0819 11:10:08.975433    8052 status.go:330] multinode-708000-m02 host status = "Stopped" (err=<nil>)
	I0819 11:10:08.975439    8052 status.go:343] host is not running, skipping remaining checks
	I0819 11:10:08.975447    8052 status.go:257] multinode-708000-m02 status: &{Name:multinode-708000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.80s)

TestMultiNode/serial/RestartMultiNode (99.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-708000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0819 11:10:29.065320    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:10:43.471888    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-708000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m38.934293746s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-708000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (99.27s)

TestMultiNode/serial/ValidateNameConflict (45s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-708000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-708000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-708000-m02 --driver=hyperkit : exit status 14 (581.162693ms)
-- stdout --
	* [multinode-708000-m02] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19478
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19478-1622/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-708000-m02' is duplicated with machine name 'multinode-708000-m02' in profile 'multinode-708000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-708000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-708000-m03 --driver=hyperkit : (40.680035929s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-708000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-708000: exit status 80 (283.338606ms)
-- stdout --
	* Adding node m03 to cluster multinode-708000 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-708000-m03 already exists in multinode-708000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-708000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-708000-m03: (3.40036793s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.00s)

TestPreload (204.83s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m58.604953428s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-066000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-066000 image pull gcr.io/k8s-minikube/busybox: (1.36254907s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-066000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-066000: (8.388869235s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-066000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0819 11:15:29.068418    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:15:43.473064    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-066000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m11.083547024s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-066000 image list
helpers_test.go:175: Cleaning up "test-preload-066000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-066000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-066000: (5.240669003s)
--- PASS: TestPreload (204.83s)

TestSkaffold (109.76s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe741373861 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe741373861 version: (1.715616863s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-458000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-458000 --memory=2600 --driver=hyperkit : (36.2495601s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe741373861 run --minikube-profile skaffold-458000 --kube-context skaffold-458000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe741373861 run --minikube-profile skaffold-458000 --kube-context skaffold-458000 --status-check=true --port-forward=false --interactive=false: (54.052292349s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6b9d4dd5b7-8fdvc" [6bed060a-8505-4cac-a713-489c292f0b38] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004259465s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7dd4c7c978-hscxs" [ed986cff-0d90-40c5-9382-6e662e9687de] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003483115s
helpers_test.go:175: Cleaning up "skaffold-458000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-458000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-458000: (5.241578798s)
--- PASS: TestSkaffold (109.76s)

TestRunningBinaryUpgrade (98.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2658705970 start -p running-upgrade-908000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2658705970 start -p running-upgrade-908000 --memory=2200 --vm-driver=hyperkit : (58.046650029s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-908000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-908000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (34.04516937s)
helpers_test.go:175: Cleaning up "running-upgrade-908000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-908000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-908000: (5.238745455s)
--- PASS: TestRunningBinaryUpgrade (98.91s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.06s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
E0819 11:20:26.545068    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/functional-622000/client.crt: no such file or directory" logger="UnhandledError"
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19478
- KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1115270593/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1115270593/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1115270593/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1115270593/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
E0819 11:20:29.068894    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19478-1622/.minikube/profiles/addons-080000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.06s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.67s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19478
- KUBECONFIG=/Users/jenkins/minikube-integration/19478-1622/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2370513929/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2370513929/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2370513929/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2370513929/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.67s)

Test skip (18/201)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)